I0325 11:38:24.327880 7 e2e.go:129] Starting e2e run "f3ba34b4-e88e-4056-a575-266a099c5218" on Ginkgo node 1
{"msg":"Test Suite starting","total":133,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616672302 - Will randomize all specs
Will run 133 of 5737 specs

Mar 25 11:38:24.398: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:38:24.400: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 25 11:38:24.426: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 25 11:38:24.594: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 25 11:38:24.594: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 25 11:38:24.594: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 25 11:38:24.618: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 25 11:38:24.618: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 25 11:38:24.618: INFO: e2e test version: v1.21.0-beta.1
Mar 25 11:38:24.620: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 25 11:38:24.620: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:38:24.717: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:38:24.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
Mar 25 11:38:25.083: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 25 11:38:33.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-3c3250e9-01f5-4c5a-845f-d5d952a86cab && mount --bind /tmp/local-volume-test-3c3250e9-01f5-4c5a-845f-d5d952a86cab /tmp/local-volume-test-3c3250e9-01f5-4c5a-845f-d5d952a86cab] Namespace:persistent-local-volumes-test-9173 PodName:hostexec-latest-worker-8c4pd ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:38:33.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 11:38:34.820: INFO: Creating a PV followed by a PVC
Mar 25 11:38:35.440: INFO: Waiting for PV local-pvg8zj7 to bind to PVC pvc-s2f2l
Mar 25 11:38:35.440: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-s2f2l] to have phase Bound
Mar 25 11:38:35.949: INFO: PersistentVolumeClaim pvc-s2f2l found but phase is Pending instead of Bound.
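The test constructs the local PV and PVC it is binding here from Go code, but an equivalent pair expressed as manifests would look roughly like the sketch below. The host path and node name are taken from the log above; the object names, capacity, and storage class are illustrative assumptions, not values from the test:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv            # illustrative name
spec:
  capacity:
    storage: 2Gi                    # illustrative size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # illustrative class
  local:
    path: /tmp/local-volume-test-3c3250e9-01f5-4c5a-845f-d5d952a86cab
  nodeAffinity:                     # required for every local PV
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - latest-worker
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-pvc           # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
```

Because `storageClassName` matches and no provisioner exists for it, the controller binds the claim to the pre-created PV, which is the Pending-then-Bound progression the log shows.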
Mar 25 11:38:38.667: INFO: PersistentVolumeClaim pvc-s2f2l found and phase=Bound (3.22634836s)
Mar 25 11:38:38.667: INFO: Waiting up to 3m0s for PersistentVolume local-pvg8zj7 to have phase Bound
Mar 25 11:38:38.670: INFO: PersistentVolume local-pvg8zj7 found and phase=Bound (2.689426ms)
[It] should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
Mar 25 11:39:08.146: INFO: pod "pod-4f0090bf-3dca-491d-b1e0-d42b7e34c629" created on Node "latest-worker"
STEP: Writing in pod1
Mar 25 11:39:08.146: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9173 PodName:pod-4f0090bf-3dca-491d-b1e0-d42b7e34c629 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:39:08.146: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:39:09.254: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
Mar 25 11:39:09.254: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9173 PodName:pod-4f0090bf-3dca-491d-b1e0-d42b7e34c629 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:39:09.254: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:39:09.909: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Mar 25 11:39:24.997: INFO: pod "pod-97a52e03-3812-4f58-8b3f-e1fe43c5c708" created on Node "latest-worker"
Mar 25 11:39:24.997: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9173 PodName:pod-97a52e03-3812-4f58-8b3f-e1fe43c5c708 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:39:24.997: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:39:25.397: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: <nil>
STEP: Writing in pod2
Mar 25 11:39:25.397: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3c3250e9-01f5-4c5a-845f-d5d952a86cab > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9173 PodName:pod-97a52e03-3812-4f58-8b3f-e1fe43c5c708 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:39:25.397: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:39:26.260: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3c3250e9-01f5-4c5a-845f-d5d952a86cab > /mnt/volume1/test-file", out: "", stderr: "", err: <nil>
STEP: Reading in pod1
Mar 25 11:39:26.260: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9173 PodName:pod-4f0090bf-3dca-491d-b1e0-d42b7e34c629 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:39:26.260: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:39:27.042: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-3c3250e9-01f5-4c5a-845f-d5d952a86cab", stderr: "", err: <nil>
STEP: Deleting pod1
STEP: Deleting pod pod-4f0090bf-3dca-491d-b1e0-d42b7e34c629 in namespace persistent-local-volumes-test-9173
STEP: Deleting pod2
STEP: Deleting pod pod-97a52e03-3812-4f58-8b3f-e1fe43c5c708 in namespace persistent-local-volumes-test-9173
[AfterEach] [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 11:39:28.647: INFO: Deleting PersistentVolumeClaim "pvc-s2f2l"
Mar 25 11:39:29.972: INFO: Deleting PersistentVolume "local-pvg8zj7"
STEP: Removing the test directory
Mar 25 11:39:31.283: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-3c3250e9-01f5-4c5a-845f-d5d952a86cab && rm -r /tmp/local-volume-test-3c3250e9-01f5-4c5a-845f-d5d952a86cab] Namespace:persistent-local-volumes-test-9173 PodName:hostexec-latest-worker-8c4pd ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:39:31.283: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:39:33.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9173" for this suite.

• [SLOW TEST:70.201 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":1,"skipped":17,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:39:34.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Mar 25 11:39:37.203: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:39:37.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-524" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82

S [SKIPPING] in Spec Setup (BeforeEach) [3.172 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383
    should create unbound pv count metrics for pvc controller after creating pv only
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485

    Only supported for providers [gce gke aws] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:39:38.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354
STEP: Initializing test volumes
Mar 25 11:39:51.537: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2f69dcb3-2da7-44c7-bb70-80af77585618] Namespace:persistent-local-volumes-test-1581 PodName:hostexec-latest-worker2-n7zqw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:39:51.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 11:39:52.364: INFO: Creating a PV followed by a PVC
Mar 25 11:39:53.098: INFO: Waiting for PV local-pvn86mf to bind to PVC pvc-4wtdd
Mar 25 11:39:53.098: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-4wtdd] to have phase Bound
Mar 25 11:39:53.428: INFO: PersistentVolumeClaim pvc-4wtdd found but phase is Pending instead of Bound.
Mar 25 11:39:55.915: INFO: PersistentVolumeClaim pvc-4wtdd found but phase is Pending instead of Bound.
Mar 25 11:39:57.959: INFO: PersistentVolumeClaim pvc-4wtdd found but phase is Pending instead of Bound.
Mar 25 11:40:00.410: INFO: PersistentVolumeClaim pvc-4wtdd found but phase is Pending instead of Bound.
Mar 25 11:40:02.688: INFO: PersistentVolumeClaim pvc-4wtdd found but phase is Pending instead of Bound.
Mar 25 11:40:05.394: INFO: PersistentVolumeClaim pvc-4wtdd found and phase=Bound (12.295883165s)
Mar 25 11:40:05.394: INFO: Waiting up to 3m0s for PersistentVolume local-pvn86mf to have phase Bound
Mar 25 11:40:06.184: INFO: PersistentVolume local-pvn86mf found and phase=Bound (789.708378ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
STEP: Initializing test volumes
Mar 25 11:40:06.438: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-50fdade0-9831-4296-84b0-5b945c814862] Namespace:persistent-local-volumes-test-1581 PodName:hostexec-latest-worker2-n7zqw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:40:06.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 11:40:07.326: INFO: Creating a PV followed by a PVC
Mar 25 11:40:08.160: INFO: Waiting for PV local-pvfr6v5 to bind to PVC pvc-lbjxq
Mar 25 11:40:08.160: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-lbjxq] to have phase Bound
Mar 25 11:40:08.721: INFO: PersistentVolumeClaim pvc-lbjxq found but phase is Pending instead of Bound.
Mar 25 11:40:11.278: INFO: PersistentVolumeClaim pvc-lbjxq found but phase is Pending instead of Bound.
Mar 25 11:40:13.304: INFO: PersistentVolumeClaim pvc-lbjxq found but phase is Pending instead of Bound.
Mar 25 11:40:15.819: INFO: PersistentVolumeClaim pvc-lbjxq found but phase is Pending instead of Bound.
Mar 25 11:40:17.865: INFO: PersistentVolumeClaim pvc-lbjxq found but phase is Pending instead of Bound.
Mar 25 11:40:19.954: INFO: PersistentVolumeClaim pvc-lbjxq found and phase=Bound (11.794545273s)
Mar 25 11:40:19.954: INFO: Waiting up to 3m0s for PersistentVolume local-pvfr6v5 to have phase Bound
Mar 25 11:40:20.006: INFO: PersistentVolume local-pvfr6v5 found and phase=Bound (52.045505ms)
Mar 25 11:40:20.799: INFO: Waiting up to 5m0s for pod "pod-14d592d6-b7bf-4c63-9824-55dd4557549d" in namespace "persistent-local-volumes-test-1581" to be "Unschedulable"
Mar 25 11:40:20.990: INFO: Pod "pod-14d592d6-b7bf-4c63-9824-55dd4557549d": Phase="Pending", Reason="", readiness=false. Elapsed: 190.466387ms
Mar 25 11:40:20.990: INFO: Pod "pod-14d592d6-b7bf-4c63-9824-55dd4557549d" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370
STEP: Cleaning up PVC and PV
Mar 25 11:40:20.990: INFO: Deleting PersistentVolumeClaim "pvc-4wtdd"
Mar 25 11:40:21.085: INFO: Deleting PersistentVolume "local-pvn86mf"
STEP: Removing the test directory
Mar 25 11:40:21.549: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2f69dcb3-2da7-44c7-bb70-80af77585618] Namespace:persistent-local-volumes-test-1581 PodName:hostexec-latest-worker2-n7zqw ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:40:21.549: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:40:22.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1581" for this suite.
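The pod above stays Pending (and the test treats that as success) because its placement conflicts with the PV's required nodeAffinity: the test volume was created on latest-worker2, while the pod is forced onto a different node. A minimal sketch of how to reproduce that mismatch by hand, assuming a PV on latest-worker2 as in this test; the pod name and image are illustrative, and `pvc-4wtdd` is the claim bound above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-affinity-mismatch      # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: latest-worker   # a node other than the PV's latest-worker2
  containers:
    - name: write-pod
      image: busybox                   # illustrative image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: vol
          mountPath: /mnt/volume1
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: pvc-4wtdd
```

The scheduler must satisfy both the nodeSelector and the bound PV's nodeAffinity; since no node satisfies both, the pod is unschedulable rather than being placed on a node where the local path does not exist.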
• [SLOW TEST:44.878 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":133,"completed":2,"skipped":67,"failed":0}
S
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:40:22.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Mar 25 11:40:24.397: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:40:24.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-3467" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82

S [SKIPPING] in Spec Setup (BeforeEach) [2.045 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383
    should create none metrics for pvc controller before creating any PV or PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481

    Only supported for providers [gce gke aws] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:40:25.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be passed when CSIDriver does not exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
STEP: Building a driver namespace object, basename csi-mock-volumes-9487
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 11:40:28.860: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-attacher
Mar 25 11:40:29.195: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9487
Mar 25 11:40:29.195: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9487
Mar 25 11:40:29.524: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9487
Mar 25 11:40:29.573: INFO: creating *v1.Role: csi-mock-volumes-9487-9714/external-attacher-cfg-csi-mock-volumes-9487
Mar 25 11:40:29.961: INFO: creating *v1.RoleBinding: csi-mock-volumes-9487-9714/csi-attacher-role-cfg
Mar 25 11:40:29.994: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-provisioner
Mar 25 11:40:30.015: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9487
Mar 25 11:40:30.016: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9487
Mar 25 11:40:30.518: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9487
Mar 25 11:40:30.542: INFO: creating *v1.Role: csi-mock-volumes-9487-9714/external-provisioner-cfg-csi-mock-volumes-9487
Mar 25 11:40:30.937: INFO: creating *v1.RoleBinding: csi-mock-volumes-9487-9714/csi-provisioner-role-cfg
Mar 25 11:40:31.008: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-resizer
Mar 25 11:40:31.116: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9487
Mar 25 11:40:31.116: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9487
Mar 25 11:40:31.423: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9487
Mar 25 11:40:31.745: INFO: creating *v1.Role: csi-mock-volumes-9487-9714/external-resizer-cfg-csi-mock-volumes-9487
Mar 25 11:40:31.940: INFO: creating *v1.RoleBinding: csi-mock-volumes-9487-9714/csi-resizer-role-cfg
Mar 25 11:40:32.387: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-snapshotter
Mar 25 11:40:32.612: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9487
Mar 25 11:40:32.612: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9487
Mar 25 11:40:33.013: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9487
Mar 25 11:40:33.067: INFO: creating *v1.Role: csi-mock-volumes-9487-9714/external-snapshotter-leaderelection-csi-mock-volumes-9487
Mar 25 11:40:33.260: INFO: creating *v1.RoleBinding: csi-mock-volumes-9487-9714/external-snapshotter-leaderelection
Mar 25 11:40:33.265: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-mock
Mar 25 11:40:33.302: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9487
Mar 25 11:40:33.350: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9487
Mar 25 11:40:33.543: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9487
Mar 25 11:40:33.763: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9487
Mar 25 11:40:34.077: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9487
Mar 25 11:40:34.832: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9487
Mar 25 11:40:35.270: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9487
Mar 25 11:40:35.709: INFO: creating *v1.StatefulSet: csi-mock-volumes-9487-9714/csi-mockplugin
Mar 25 11:40:36.003: INFO: creating *v1.StatefulSet: csi-mock-volumes-9487-9714/csi-mockplugin-attacher
Mar 25 11:40:36.104: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9487 to register on node latest-worker2
STEP: Creating pod
Mar 25 11:40:54.788: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Mar 25 11:40:55.211: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-8q9jc] to have phase Bound
Mar 25 11:40:55.509: INFO: PersistentVolumeClaim pvc-8q9jc found but phase is Pending instead of Bound.
Mar 25 11:40:57.908: INFO: PersistentVolumeClaim pvc-8q9jc found and phase=Bound (2.696118369s)
STEP: Deleting the previously created pod
Mar 25 11:41:13.736: INFO: Deleting pod "pvc-volume-tester-crz2s" in namespace "csi-mock-volumes-9487"
Mar 25 11:41:13.939: INFO: Wait up to 5m0s for pod "pvc-volume-tester-crz2s" to be fully deleted
STEP: Checking CSI driver logs
Mar 25 11:41:49.912: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7ec0746b-af2e-4ac1-8ea0-b246710937db/volumes/kubernetes.io~csi/pvc-b3ef2c9c-5df0-4364-a370-0598f3c28f14/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-crz2s
Mar 25 11:41:49.912: INFO: Deleting pod "pvc-volume-tester-crz2s" in namespace "csi-mock-volumes-9487"
STEP: Deleting claim pvc-8q9jc
Mar 25 11:41:50.842: INFO: Waiting up to 2m0s for PersistentVolume pvc-b3ef2c9c-5df0-4364-a370-0598f3c28f14 to get deleted
Mar 25 11:41:51.004: INFO: PersistentVolume pvc-b3ef2c9c-5df0-4364-a370-0598f3c28f14 found and phase=Bound (161.762181ms)
Mar 25 11:41:53.165: INFO: PersistentVolume pvc-b3ef2c9c-5df0-4364-a370-0598f3c28f14 was removed
STEP: Deleting storageclass csi-mock-volumes-9487-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9487
STEP: Waiting for namespaces [csi-mock-volumes-9487] to vanish
STEP: uninstalling csi mock driver
Mar 25 11:42:15.704: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-attacher
Mar 25 11:42:15.860: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9487
Mar 25 11:42:16.029: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9487
Mar 25 11:42:16.075: INFO: deleting *v1.Role: csi-mock-volumes-9487-9714/external-attacher-cfg-csi-mock-volumes-9487
Mar 25 11:42:16.128: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9487-9714/csi-attacher-role-cfg
Mar 25 11:42:16.622: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-provisioner
Mar 25 11:42:17.091: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9487
Mar 25 11:42:17.675: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9487
Mar 25 11:42:18.660: INFO: deleting *v1.Role: csi-mock-volumes-9487-9714/external-provisioner-cfg-csi-mock-volumes-9487
Mar 25 11:42:19.870: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9487-9714/csi-provisioner-role-cfg
Mar 25 11:42:20.247: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-resizer
Mar 25 11:42:20.719: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9487
Mar 25 11:42:20.794: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9487
Mar 25 11:42:20.877: INFO: deleting *v1.Role: csi-mock-volumes-9487-9714/external-resizer-cfg-csi-mock-volumes-9487
Mar 25 11:42:20.958: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9487-9714/csi-resizer-role-cfg
Mar 25 11:42:21.012: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-snapshotter
Mar 25 11:42:21.932: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9487
Mar 25 11:42:22.549: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9487
Mar 25 11:42:23.065: INFO: deleting *v1.Role: csi-mock-volumes-9487-9714/external-snapshotter-leaderelection-csi-mock-volumes-9487
Mar 25 11:42:23.430: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9487-9714/external-snapshotter-leaderelection
Mar 25 11:42:23.706: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9487-9714/csi-mock
Mar 25 11:42:24.386: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9487
Mar 25 11:42:24.984: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9487
Mar 25 11:42:25.196: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9487
Mar 25 11:42:25.304: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9487
Mar 25 11:42:25.385: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9487
Mar 25 11:42:25.455: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9487
Mar 25 11:42:25.492: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9487
Mar 25 11:42:25.788: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9487-9714/csi-mockplugin
Mar 25 11:42:25.874: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9487-9714/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-9487-9714
STEP: Waiting for namespaces [csi-mock-volumes-9487-9714] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:42:58.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

• [SLOW TEST:153.483 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":133,"completed":3,"skipped":87,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:42:58.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-56604439-2b3a-495d-8033-57b4af70d359"
Mar 25 11:43:07.120: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-56604439-2b3a-495d-8033-57b4af70d359 && dd if=/dev/zero of=/tmp/local-volume-test-56604439-2b3a-495d-8033-57b4af70d359/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-56604439-2b3a-495d-8033-57b4af70d359/file] Namespace:persistent-local-volumes-test-6485 PodName:hostexec-latest-worker-x6hn6 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:43:07.120: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:43:07.639: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-56604439-2b3a-495d-8033-57b4af70d359/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6485 PodName:hostexec-latest-worker-x6hn6 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:43:07.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 11:43:08.030: INFO: Creating a PV followed by a PVC
Mar 25 11:43:08.093: INFO: Waiting for PV local-pvzx7rs to bind to PVC pvc-tpv9p
Mar 25 11:43:08.093: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-tpv9p] to have phase Bound
Mar 25 11:43:08.139: INFO: PersistentVolumeClaim pvc-tpv9p found but phase is Pending instead of Bound.
Mar 25 11:43:10.251: INFO: PersistentVolumeClaim pvc-tpv9p found but phase is Pending instead of Bound.
Mar 25 11:43:12.265: INFO: PersistentVolumeClaim pvc-tpv9p found but phase is Pending instead of Bound.
Mar 25 11:43:14.344: INFO: PersistentVolumeClaim pvc-tpv9p found but phase is Pending instead of Bound.
Mar 25 11:43:16.478: INFO: PersistentVolumeClaim pvc-tpv9p found but phase is Pending instead of Bound.
Mar 25 11:43:18.543: INFO: PersistentVolumeClaim pvc-tpv9p found and phase=Bound (10.449723015s) Mar 25 11:43:18.543: INFO: Waiting up to 3m0s for PersistentVolume local-pvzx7rs to have phase Bound Mar 25 11:43:18.743: INFO: PersistentVolume local-pvzx7rs found and phase=Bound (200.626975ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 11:43:28.098: INFO: pod "pod-3828446c-c7cf-42e3-ac89-2d6bd0c5e092" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 11:43:28.098: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6485 PodName:pod-3828446c-c7cf-42e3-ac89-2d6bd0c5e092 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:43:28.098: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:43:28.332: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 11:43:28.332: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6485 PodName:pod-3828446c-c7cf-42e3-ac89-2d6bd0c5e092 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:43:28.332: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:43:28.642: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 11:43:28.642: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-6485 PodName:pod-3828446c-c7cf-42e3-ac89-2d6bd0c5e092 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:43:28.642: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:43:28.831: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-3828446c-c7cf-42e3-ac89-2d6bd0c5e092 in namespace persistent-local-volumes-test-6485 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 11:43:28.869: INFO: Deleting PersistentVolumeClaim "pvc-tpv9p" Mar 25 11:43:28.911: INFO: Deleting PersistentVolume "local-pvzx7rs" Mar 25 11:43:29.093: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-56604439-2b3a-495d-8033-57b4af70d359/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6485 PodName:hostexec-latest-worker-x6hn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:43:29.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-56604439-2b3a-495d-8033-57b4af70d359/file Mar 25 11:43:29.216: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6485 PodName:hostexec-latest-worker-x6hn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Mar 25 11:43:29.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-56604439-2b3a-495d-8033-57b4af70d359 Mar 25 11:43:29.358: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-56604439-2b3a-495d-8033-57b4af70d359] Namespace:persistent-local-volumes-test-6485 PodName:hostexec-latest-worker-x6hn6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:43:29.358: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:43:31.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6485" for this suite. • [SLOW TEST:36.482 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":4,"skipped":107,"failed":0} 
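The tests above repeatedly locate the loop device backing a file with the pipeline `losetup | grep <backing-file> | awk '{ print $1 }'`. A minimal sketch of that extraction, run against sample `losetup` output rather than a live system (the device paths, column layout, and backing-file path here are made-up stand-ins, not taken from the log):

```shell
# Fake `losetup` listing; real output also has a header line and more columns,
# but the pipeline only relies on the device being column 1 of the matching row.
losetup_output="/dev/loop0 0 0 1 0 /tmp/local-volume-test-1234/file 0 512
/dev/loop1 0 0 1 0 /tmp/other/file 0 512"

backing_file="/tmp/local-volume-test-1234/file"

# Same pipeline the e2e helper runs: keep the row for our backing file,
# print its first column (the loop device node).
dev=$(printf '%s\n' "$losetup_output" | grep "$backing_file" | awk '{ print $1 }')
echo "$dev"
```

The e2e code assigns the result to `E2E_LOOP_DEV` and echoes it so the device name can be captured from the pod's stdout; the teardown steps later pass that device to `losetup -d`.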
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:43:34.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4" Mar 25 11:43:48.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4 && dd if=/dev/zero of=/tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4/file] Namespace:persistent-local-volumes-test-1176 PodName:hostexec-latest-worker2-pckwl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:43:48.873: INFO: >>> 
kubeConfig: /root/.kube/config Mar 25 11:43:49.259: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1176 PodName:hostexec-latest-worker2-pckwl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:43:49.260: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:43:49.924: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4 && chmod o+rwx /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4] Namespace:persistent-local-volumes-test-1176 PodName:hostexec-latest-worker2-pckwl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:43:49.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 11:43:50.762: INFO: Creating a PV followed by a PVC Mar 25 11:43:50.872: INFO: Waiting for PV local-pvln2wt to bind to PVC pvc-stptb Mar 25 11:43:50.872: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-stptb] to have phase Bound Mar 25 11:43:50.928: INFO: PersistentVolumeClaim pvc-stptb found but phase is Pending instead of Bound. Mar 25 11:43:52.936: INFO: PersistentVolumeClaim pvc-stptb found but phase is Pending instead of Bound. Mar 25 11:43:54.957: INFO: PersistentVolumeClaim pvc-stptb found but phase is Pending instead of Bound. Mar 25 11:43:57.502: INFO: PersistentVolumeClaim pvc-stptb found but phase is Pending instead of Bound. Mar 25 11:44:00.047: INFO: PersistentVolumeClaim pvc-stptb found but phase is Pending instead of Bound. Mar 25 11:44:02.274: INFO: PersistentVolumeClaim pvc-stptb found but phase is Pending instead of Bound. 
Mar 25 11:44:04.294: INFO: PersistentVolumeClaim pvc-stptb found and phase=Bound (13.421653805s) Mar 25 11:44:04.294: INFO: Waiting up to 3m0s for PersistentVolume local-pvln2wt to have phase Bound Mar 25 11:44:04.513: INFO: PersistentVolume local-pvln2wt found and phase=Bound (219.30759ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 11:44:12.945: INFO: pod "pod-4d2e00fa-5a28-44ad-968c-abc1c2870966" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 11:44:12.945: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1176 PodName:pod-4d2e00fa-5a28-44ad-968c-abc1c2870966 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:44:12.945: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:44:13.102: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 11:44:13.102: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1176 PodName:pod-4d2e00fa-5a28-44ad-968c-abc1c2870966 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:44:13.102: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:44:13.285: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-4d2e00fa-5a28-44ad-968c-abc1c2870966 in namespace persistent-local-volumes-test-1176 STEP: Creating pod2 STEP: Creating a pod Mar 25 11:44:21.906: INFO: pod "pod-420789d2-8abe-4bc9-93cd-09eae7b6394c" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 11:44:21.906: INFO: 
ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1176 PodName:pod-420789d2-8abe-4bc9-93cd-09eae7b6394c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:44:21.906: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:44:22.064: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-420789d2-8abe-4bc9-93cd-09eae7b6394c in namespace persistent-local-volumes-test-1176 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 11:44:22.467: INFO: Deleting PersistentVolumeClaim "pvc-stptb" Mar 25 11:44:22.678: INFO: Deleting PersistentVolume "local-pvln2wt" Mar 25 11:44:23.377: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4] Namespace:persistent-local-volumes-test-1176 PodName:hostexec-latest-worker2-pckwl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:44:23.377: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:44:24.564: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1176 PodName:hostexec-latest-worker2-pckwl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:44:24.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4/file 
Mar 25 11:44:25.666: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1176 PodName:hostexec-latest-worker2-pckwl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:44:25.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4 Mar 25 11:44:26.699: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a585049e-ce9f-40ae-a942-eaafe9e249d4] Namespace:persistent-local-volumes-test-1176 PodName:hostexec-latest-worker2-pckwl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:44:26.699: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:44:28.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1176" for this suite. 
• [SLOW TEST:54.251 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":5,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:44:29.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-ec4abd41-7be9-4f26-bcdb-2cc14f6e900a" Mar 25 11:44:38.148: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ec4abd41-7be9-4f26-bcdb-2cc14f6e900a && dd if=/dev/zero of=/tmp/local-volume-test-ec4abd41-7be9-4f26-bcdb-2cc14f6e900a/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ec4abd41-7be9-4f26-bcdb-2cc14f6e900a/file] Namespace:persistent-local-volumes-test-8982 PodName:hostexec-latest-worker2-7dbmg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:44:38.148: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:44:38.475: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ec4abd41-7be9-4f26-bcdb-2cc14f6e900a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8982 PodName:hostexec-latest-worker2-7dbmg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:44:38.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 11:44:38.595: INFO: Creating a PV followed by a PVC Mar 25 11:44:38.681: INFO: Waiting for PV local-pvr267d to bind to PVC pvc-4g2m9 Mar 25 11:44:38.681: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-4g2m9] to have phase Bound Mar 25 11:44:38.736: INFO: PersistentVolumeClaim pvc-4g2m9 found but phase is Pending instead of Bound. 
Mar 25 11:44:40.997: INFO: PersistentVolumeClaim pvc-4g2m9 found and phase=Bound (2.315734314s) Mar 25 11:44:40.997: INFO: Waiting up to 3m0s for PersistentVolume local-pvr267d to have phase Bound Mar 25 11:44:41.269: INFO: PersistentVolume local-pvr267d found and phase=Bound (271.543255ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 11:44:54.920: INFO: pod "pod-4c1ca9e6-4008-4eff-9fc8-378264055aae" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 11:44:54.920: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8982 PodName:pod-4c1ca9e6-4008-4eff-9fc8-378264055aae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:44:54.920: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:44:55.416: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000042 seconds, 418.5KB/s", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 11:44:55.416: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-8982 
PodName:pod-4c1ca9e6-4008-4eff-9fc8-378264055aae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:44:55.417: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:44:55.630: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-4c1ca9e6-4008-4eff-9fc8-378264055aae in namespace persistent-local-volumes-test-8982 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 11:44:56.000: INFO: Deleting PersistentVolumeClaim "pvc-4g2m9" Mar 25 11:44:57.011: INFO: Deleting PersistentVolume "local-pvr267d" Mar 25 11:44:57.306: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ec4abd41-7be9-4f26-bcdb-2cc14f6e900a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8982 PodName:hostexec-latest-worker2-7dbmg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:44:57.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-ec4abd41-7be9-4f26-bcdb-2cc14f6e900a/file Mar 25 11:44:57.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8982 PodName:hostexec-latest-worker2-7dbmg ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:44:57.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ec4abd41-7be9-4f26-bcdb-2cc14f6e900a Mar 25 11:44:59.025: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ec4abd41-7be9-4f26-bcdb-2cc14f6e900a] Namespace:persistent-local-volumes-test-8982 PodName:hostexec-latest-worker2-7dbmg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:44:59.025: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:44:59.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8982" for this suite. • [SLOW TEST:30.886 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":6,"skipped":275,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set 
fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:45:00.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 11:45:10.664: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-90c8bedb-6de1-4644-badd-83a67d0a72c7-backend && mount --bind /tmp/local-volume-test-90c8bedb-6de1-4644-badd-83a67d0a72c7-backend /tmp/local-volume-test-90c8bedb-6de1-4644-badd-83a67d0a72c7-backend && ln -s /tmp/local-volume-test-90c8bedb-6de1-4644-badd-83a67d0a72c7-backend /tmp/local-volume-test-90c8bedb-6de1-4644-badd-83a67d0a72c7] Namespace:persistent-local-volumes-test-7014 PodName:hostexec-latest-worker-pgc92 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:45:10.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 11:45:10.959: INFO: Creating a PV followed by a PVC Mar 25 11:45:11.092: INFO: Waiting for PV local-pv2tz67 to bind to PVC pvc-njvrv Mar 25 11:45:11.092: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-njvrv] 
to have phase Bound Mar 25 11:45:12.035: INFO: PersistentVolumeClaim pvc-njvrv found but phase is Pending instead of Bound. Mar 25 11:45:14.113: INFO: PersistentVolumeClaim pvc-njvrv found and phase=Bound (3.021553663s) Mar 25 11:45:14.113: INFO: Waiting up to 3m0s for PersistentVolume local-pv2tz67 to have phase Bound Mar 25 11:45:14.287: INFO: PersistentVolume local-pv2tz67 found and phase=Bound (173.333807ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 11:45:14.425: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 11:45:14.425: INFO: Deleting PersistentVolumeClaim "pvc-njvrv" Mar 25 11:45:14.669: INFO: Deleting PersistentVolume "local-pv2tz67" STEP: Removing the test directory Mar 25 11:45:14.707: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-90c8bedb-6de1-4644-badd-83a67d0a72c7 && umount /tmp/local-volume-test-90c8bedb-6de1-4644-badd-83a67d0a72c7-backend && rm -r /tmp/local-volume-test-90c8bedb-6de1-4644-badd-83a67d0a72c7-backend] Namespace:persistent-local-volumes-test-7014 PodName:hostexec-latest-worker-pgc92 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:45:14.707: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:45:19.236: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7014" for this suite. S [SKIPPING] [19.515 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSS ------------------------------ [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:45:19.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 STEP: Building a driver namespace object, basename csi-mock-volumes-667 STEP: Waiting for a default service account to be 
provisioned in namespace STEP: deploying csi mock driver Mar 25 11:45:22.655: INFO: creating *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-attacher Mar 25 11:45:22.701: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-667 Mar 25 11:45:22.701: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-667 Mar 25 11:45:22.737: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-667 Mar 25 11:45:23.054: INFO: creating *v1.Role: csi-mock-volumes-667-7484/external-attacher-cfg-csi-mock-volumes-667 Mar 25 11:45:23.317: INFO: creating *v1.RoleBinding: csi-mock-volumes-667-7484/csi-attacher-role-cfg Mar 25 11:45:23.617: INFO: creating *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-provisioner Mar 25 11:45:23.671: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-667 Mar 25 11:45:23.671: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-667 Mar 25 11:45:23.706: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-667 Mar 25 11:45:23.826: INFO: creating *v1.Role: csi-mock-volumes-667-7484/external-provisioner-cfg-csi-mock-volumes-667 Mar 25 11:45:23.838: INFO: creating *v1.RoleBinding: csi-mock-volumes-667-7484/csi-provisioner-role-cfg Mar 25 11:45:23.865: INFO: creating *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-resizer Mar 25 11:45:23.880: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-667 Mar 25 11:45:23.880: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-667 Mar 25 11:45:23.904: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-667 Mar 25 11:45:23.994: INFO: creating *v1.Role: csi-mock-volumes-667-7484/external-resizer-cfg-csi-mock-volumes-667 Mar 25 11:45:24.006: INFO: creating *v1.RoleBinding: csi-mock-volumes-667-7484/csi-resizer-role-cfg Mar 25 11:45:24.055: INFO: creating *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-snapshotter Mar 25 11:45:24.073: INFO: 
creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-667 Mar 25 11:45:24.073: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-667 Mar 25 11:45:24.162: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-667 Mar 25 11:45:24.193: INFO: creating *v1.Role: csi-mock-volumes-667-7484/external-snapshotter-leaderelection-csi-mock-volumes-667 Mar 25 11:45:25.154: INFO: creating *v1.RoleBinding: csi-mock-volumes-667-7484/external-snapshotter-leaderelection Mar 25 11:45:25.201: INFO: creating *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-mock Mar 25 11:45:25.503: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-667 Mar 25 11:45:25.542: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-667 Mar 25 11:45:25.583: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-667 Mar 25 11:45:26.386: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-667 Mar 25 11:45:26.744: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-667 Mar 25 11:45:27.085: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-667 Mar 25 11:45:27.379: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-667 Mar 25 11:45:27.386: INFO: creating *v1.StatefulSet: csi-mock-volumes-667-7484/csi-mockplugin Mar 25 11:45:27.427: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-667 Mar 25 11:45:27.557: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-667" Mar 25 11:45:27.638: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-667 to register on node latest-worker STEP: Creating pod with fsGroup Mar 25 11:46:02.510: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 11:46:02.988: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-m9smg] to 
have phase Bound Mar 25 11:46:03.186: INFO: PersistentVolumeClaim pvc-m9smg found but phase is Pending instead of Bound. Mar 25 11:46:05.761: INFO: PersistentVolumeClaim pvc-m9smg found and phase=Bound (2.773252427s) Mar 25 11:46:14.850: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-667] Namespace:csi-mock-volumes-667 PodName:pvc-volume-tester-79q8q ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:46:14.851: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:46:15.284: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-667/csi-mock-volumes-667'; sync] Namespace:csi-mock-volumes-667 PodName:pvc-volume-tester-79q8q ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:46:15.284: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:48:39.193: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-667/csi-mock-volumes-667] Namespace:csi-mock-volumes-667 PodName:pvc-volume-tester-79q8q ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:48:39.193: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:48:39.608: INFO: pod csi-mock-volumes-667/pvc-volume-tester-79q8q exec for cmd ls -l /mnt/test/csi-mock-volumes-667/csi-mock-volumes-667, stdout: -rw-r--r-- 1 root 4559 13 Mar 25 11:46 /mnt/test/csi-mock-volumes-667/csi-mock-volumes-667, stderr: Mar 25 11:48:39.608: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-667] Namespace:csi-mock-volumes-667 PodName:pvc-volume-tester-79q8q ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:48:39.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-79q8q Mar 25 11:48:39.813: INFO: Deleting pod 
"pvc-volume-tester-79q8q" in namespace "csi-mock-volumes-667" Mar 25 11:48:40.397: INFO: Wait up to 5m0s for pod "pvc-volume-tester-79q8q" to be fully deleted STEP: Deleting claim pvc-m9smg Mar 25 11:49:37.817: INFO: Waiting up to 2m0s for PersistentVolume pvc-f0d02209-5930-468b-9730-09167b08775c to get deleted Mar 25 11:49:37.991: INFO: PersistentVolume pvc-f0d02209-5930-468b-9730-09167b08775c found and phase=Bound (174.549465ms) Mar 25 11:49:40.207: INFO: PersistentVolume pvc-f0d02209-5930-468b-9730-09167b08775c was removed STEP: Deleting storageclass csi-mock-volumes-667-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-667 STEP: Waiting for namespaces [csi-mock-volumes-667] to vanish STEP: uninstalling csi mock driver Mar 25 11:50:08.148: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-attacher Mar 25 11:50:08.593: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-667 Mar 25 11:50:09.333: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-667 Mar 25 11:50:09.781: INFO: deleting *v1.Role: csi-mock-volumes-667-7484/external-attacher-cfg-csi-mock-volumes-667 Mar 25 11:50:10.277: INFO: deleting *v1.RoleBinding: csi-mock-volumes-667-7484/csi-attacher-role-cfg Mar 25 11:50:10.525: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-provisioner Mar 25 11:50:11.008: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-667 Mar 25 11:50:11.161: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-667 Mar 25 11:50:11.907: INFO: deleting *v1.Role: csi-mock-volumes-667-7484/external-provisioner-cfg-csi-mock-volumes-667 Mar 25 11:50:12.659: INFO: deleting *v1.RoleBinding: csi-mock-volumes-667-7484/csi-provisioner-role-cfg Mar 25 11:50:13.068: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-resizer Mar 25 11:50:13.330: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-667 Mar 25 
11:50:13.379: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-667 Mar 25 11:50:13.508: INFO: deleting *v1.Role: csi-mock-volumes-667-7484/external-resizer-cfg-csi-mock-volumes-667 Mar 25 11:50:13.541: INFO: deleting *v1.RoleBinding: csi-mock-volumes-667-7484/csi-resizer-role-cfg Mar 25 11:50:13.615: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-snapshotter Mar 25 11:50:13.692: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-667 Mar 25 11:50:13.759: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-667 Mar 25 11:50:13.997: INFO: deleting *v1.Role: csi-mock-volumes-667-7484/external-snapshotter-leaderelection-csi-mock-volumes-667 Mar 25 11:50:15.024: INFO: deleting *v1.RoleBinding: csi-mock-volumes-667-7484/external-snapshotter-leaderelection Mar 25 11:50:16.108: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-667-7484/csi-mock Mar 25 11:50:17.255: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-667 Mar 25 11:50:18.523: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-667 Mar 25 11:50:19.497: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-667 Mar 25 11:50:20.633: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-667 Mar 25 11:50:21.750: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-667 Mar 25 11:50:23.260: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-667 Mar 25 11:50:23.841: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-667 Mar 25 11:50:24.039: INFO: deleting *v1.StatefulSet: csi-mock-volumes-667-7484/csi-mockplugin Mar 25 11:50:24.522: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-667 STEP: deleting the driver namespace: csi-mock-volumes-667-7484 STEP: Waiting for namespaces 
[csi-mock-volumes-667-7484] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:50:49.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:330.579 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1433 should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":133,"completed":7,"skipped":294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:50:50.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 25 11:50:51.338: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:50:51.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5798" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [2.101 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:50:52.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03" Mar 25 11:51:01.634: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03" "/tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03"] Namespace:persistent-local-volumes-test-6010 PodName:hostexec-latest-worker-tqx5r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:51:01.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 11:51:01.817: INFO: Creating a PV followed by a PVC Mar 25 11:51:01.895: INFO: Waiting for PV local-pvfg9tg to bind to PVC pvc-vdb5n Mar 25 11:51:01.895: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-vdb5n] to have phase Bound Mar 25 11:51:02.045: INFO: PersistentVolumeClaim pvc-vdb5n found but phase is Pending instead of Bound. 
Mar 25 11:51:06.024: INFO: PersistentVolumeClaim pvc-vdb5n found and phase=Bound (4.129385999s) Mar 25 11:51:06.024: INFO: Waiting up to 3m0s for PersistentVolume local-pvfg9tg to have phase Bound Mar 25 11:51:06.869: INFO: PersistentVolume local-pvfg9tg found and phase=Bound (845.368821ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 11:51:19.115: INFO: pod "pod-41785dce-dc0b-405a-a87e-b92609710c7c" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 11:51:19.115: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6010 PodName:pod-41785dce-dc0b-405a-a87e-b92609710c7c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:51:19.115: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:51:19.309: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 11:51:19.309: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6010 PodName:pod-41785dce-dc0b-405a-a87e-b92609710c7c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:51:19.309: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:51:19.889: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 11:51:30.343: INFO: pod "pod-553969b4-4a26-4ebc-ab39-18e5caee398d" created on Node "latest-worker" Mar 25 11:51:30.343: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6010 
PodName:pod-553969b4-4a26-4ebc-ab39-18e5caee398d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:51:30.343: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:51:30.576: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 11:51:30.576: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6010 PodName:pod-553969b4-4a26-4ebc-ab39-18e5caee398d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:51:30.576: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:51:31.487: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 11:51:31.487: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6010 PodName:pod-41785dce-dc0b-405a-a87e-b92609710c7c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:51:31.487: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:51:32.049: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-41785dce-dc0b-405a-a87e-b92609710c7c in namespace persistent-local-volumes-test-6010 STEP: Deleting pod2 STEP: Deleting pod pod-553969b4-4a26-4ebc-ab39-18e5caee398d in namespace persistent-local-volumes-test-6010 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 11:51:33.506: INFO: 
Deleting PersistentVolumeClaim "pvc-vdb5n" Mar 25 11:51:35.493: INFO: Deleting PersistentVolume "local-pvfg9tg" STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03" Mar 25 11:51:35.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03"] Namespace:persistent-local-volumes-test-6010 PodName:hostexec-latest-worker-tqx5r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:51:35.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 11:51:36.767: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ffe04aaf-29de-4d27-8c94-cf3d5000aa03] Namespace:persistent-local-volumes-test-6010 PodName:hostexec-latest-worker-tqx5r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:51:36.767: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:51:40.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6010" for this suite. 
• [SLOW TEST:49.030 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":8,"skipped":348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:51:41.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Mar 25 
11:52:11.839: INFO: Deleting pod "pv-7897"/"pod-ephm-test-projected-pvzm" Mar 25 11:52:11.839: INFO: Deleting pod "pod-ephm-test-projected-pvzm" in namespace "pv-7897" Mar 25 11:52:11.981: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-pvzm" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:52:18.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7897" for this suite. • [SLOW TEST:38.216 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":133,"completed":9,"skipped":379,"failed":0} SSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:52:19.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-4a12e1c2-a89c-456d-a912-8b40294cc52c" Mar 25 11:52:31.926: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4a12e1c2-a89c-456d-a912-8b40294cc52c && dd if=/dev/zero of=/tmp/local-volume-test-4a12e1c2-a89c-456d-a912-8b40294cc52c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-4a12e1c2-a89c-456d-a912-8b40294cc52c/file] Namespace:persistent-local-volumes-test-6516 PodName:hostexec-latest-worker2-v5769 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:52:31.926: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:52:32.169: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4a12e1c2-a89c-456d-a912-8b40294cc52c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6516 PodName:hostexec-latest-worker2-v5769 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:52:32.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 11:52:32.334: INFO: Creating a PV followed by a PVC Mar 25 11:52:32.386: INFO: Waiting for PV local-pv7ldsx to bind to PVC pvc-lndf4 Mar 25 11:52:32.386: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-lndf4] to have phase Bound Mar 25 11:52:32.518: 
INFO: PersistentVolumeClaim pvc-lndf4 found but phase is Pending instead of Bound. Mar 25 11:52:34.818: INFO: PersistentVolumeClaim pvc-lndf4 found and phase=Bound (2.432059863s) Mar 25 11:52:34.818: INFO: Waiting up to 3m0s for PersistentVolume local-pv7ldsx to have phase Bound Mar 25 11:52:34.864: INFO: PersistentVolume local-pv7ldsx found and phase=Bound (45.584171ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 11:52:44.920: INFO: pod "pod-5d656a80-e803-4ad1-9251-abc018716f46" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 11:52:44.920: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6516 PodName:pod-5d656a80-e803-4ad1-9251-abc018716f46 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:52:44.920: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:52:45.019: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 11:52:45.019: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6516 PodName:pod-5d656a80-e803-4ad1-9251-abc018716f46 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:52:45.019: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:52:45.156: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-5d656a80-e803-4ad1-9251-abc018716f46 in namespace persistent-local-volumes-test-6516 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 11:52:45.274: INFO: Deleting PersistentVolumeClaim "pvc-lndf4" Mar 25 11:52:45.345: INFO: Deleting PersistentVolume "local-pv7ldsx" Mar 25 11:52:45.438: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4a12e1c2-a89c-456d-a912-8b40294cc52c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6516 PodName:hostexec-latest-worker2-v5769 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:52:45.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-4a12e1c2-a89c-456d-a912-8b40294cc52c/file Mar 25 11:52:45.582: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6516 PodName:hostexec-latest-worker2-v5769 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:52:45.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-4a12e1c2-a89c-456d-a912-8b40294cc52c Mar 25 11:52:45.698: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4a12e1c2-a89c-456d-a912-8b40294cc52c] Namespace:persistent-local-volumes-test-6516 PodName:hostexec-latest-worker2-v5769 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:52:45.698: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:52:45.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6516" for this suite. • [SLOW TEST:26.630 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":10,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:52:46.194: INFO: >>> kubeConfig: 
/root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-73774b11-77a1-4095-94ff-993391c73b39
STEP: Creating a pod to test consume configMaps
Mar 25 11:52:47.481: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e" in namespace "configmap-2703" to be "Succeeded or Failed"
Mar 25 11:52:47.655: INFO: Pod "pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e": Phase="Pending", Reason="", readiness=false. Elapsed: 174.384274ms
Mar 25 11:52:49.711: INFO: Pod "pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230633905s
Mar 25 11:52:51.992: INFO: Pod "pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.510885968s
Mar 25 11:52:54.236: INFO: Pod "pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e": Phase="Running", Reason="", readiness=true. Elapsed: 6.755203628s
Mar 25 11:52:56.352: INFO: Pod "pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.871419338s
STEP: Saw pod success
Mar 25 11:52:56.352: INFO: Pod "pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e" satisfied condition "Succeeded or Failed"
Mar 25 11:52:56.420: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e container agnhost-container:
STEP: delete the pod
Mar 25 11:52:57.425: INFO: Waiting for pod pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e to disappear
Mar 25 11:52:57.532: INFO: Pod pod-configmaps-5b88370b-0cb0-40f7-b1e9-c882dfa7171e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:52:57.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2703" for this suite.
• [SLOW TEST:11.558 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":11,"skipped":406,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume limits should verify that all nodes have volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41
[BeforeEach] [sig-storage] Volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:52:57.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-limits-on-node
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35
Mar 25 11:52:59.246: INFO: Only supported for providers [aws gce gke] (not local)
[AfterEach] [sig-storage] Volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:52:59.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-9073" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.982 seconds]
[sig-storage] Volume limits
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should verify that all nodes have volume limits [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41
  Only supported for providers [aws gce gke] (not local)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:36
------------------------------
[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:53:00.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be passed when podInfoOnMount=nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
STEP: Building a driver namespace object, basename csi-mock-volumes-9724
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 11:53:07.530: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-attacher
Mar 25 11:53:08.018: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9724
Mar 25 11:53:08.018: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9724
Mar 25 11:53:08.323: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9724
Mar 25 11:53:08.359: INFO: creating *v1.Role: csi-mock-volumes-9724-2743/external-attacher-cfg-csi-mock-volumes-9724
Mar 25 11:53:08.957: INFO: creating *v1.RoleBinding: csi-mock-volumes-9724-2743/csi-attacher-role-cfg
Mar 25 11:53:09.614: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-provisioner
Mar 25 11:53:10.071: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9724
Mar 25 11:53:10.071: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9724
Mar 25 11:53:10.658: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9724
Mar 25 11:53:11.048: INFO: creating *v1.Role: csi-mock-volumes-9724-2743/external-provisioner-cfg-csi-mock-volumes-9724
Mar 25 11:53:11.085: INFO: creating *v1.RoleBinding: csi-mock-volumes-9724-2743/csi-provisioner-role-cfg
Mar 25 11:53:11.302: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-resizer
Mar 25 11:53:11.550: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9724
Mar 25 11:53:11.550: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9724
Mar 25 11:53:11.755: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9724
Mar 25 11:53:11.832: INFO: creating *v1.Role: csi-mock-volumes-9724-2743/external-resizer-cfg-csi-mock-volumes-9724
Mar 25 11:53:12.658: INFO: creating *v1.RoleBinding: csi-mock-volumes-9724-2743/csi-resizer-role-cfg
Mar 25 11:53:12.951: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-snapshotter
Mar 25 11:53:13.325: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9724
Mar 25 11:53:13.325: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9724
Mar 25 11:53:13.549: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9724
Mar 25 11:53:13.753: INFO: creating *v1.Role: csi-mock-volumes-9724-2743/external-snapshotter-leaderelection-csi-mock-volumes-9724
Mar 25 11:53:13.810: INFO: creating *v1.RoleBinding: csi-mock-volumes-9724-2743/external-snapshotter-leaderelection
Mar 25 11:53:13.935: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-mock
Mar 25 11:53:14.178: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9724
Mar 25 11:53:14.217: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9724
Mar 25 11:53:14.544: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9724
Mar 25 11:53:14.681: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9724
Mar 25 11:53:14.927: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9724
Mar 25 11:53:14.992: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9724
Mar 25 11:53:15.328: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9724
Mar 25 11:53:15.331: INFO: creating *v1.StatefulSet: csi-mock-volumes-9724-2743/csi-mockplugin
Mar 25 11:53:15.393: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9724
Mar 25 11:53:15.479: INFO: creating *v1.StatefulSet: csi-mock-volumes-9724-2743/csi-mockplugin-attacher
Mar 25 11:53:15.493: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9724"
Mar 25 11:53:15.550: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9724 to register on node latest-worker2
STEP: Creating pod
Mar 25 11:53:44.330: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Mar 25 11:53:45.675: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-4fbj8] to have phase Bound
Mar 25 11:53:45.935: INFO: PersistentVolumeClaim pvc-4fbj8 found but phase is Pending instead of Bound.
Mar 25 11:53:48.722: INFO: PersistentVolumeClaim pvc-4fbj8 found and phase=Bound (3.046878034s)
STEP: Deleting the previously created pod
Mar 25 11:54:09.962: INFO: Deleting pod "pvc-volume-tester-ppfm5" in namespace "csi-mock-volumes-9724"
Mar 25 11:54:10.174: INFO: Wait up to 5m0s for pod "pvc-volume-tester-ppfm5" to be fully deleted
STEP: Checking CSI driver logs
Mar 25 11:54:50.725: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/618f6b01-e585-4543-8352-d657c191b995/volumes/kubernetes.io~csi/pvc-26ec8780-c0f1-4911-8f97-299243612fd1/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-ppfm5
Mar 25 11:54:50.725: INFO: Deleting pod "pvc-volume-tester-ppfm5" in namespace "csi-mock-volumes-9724"
STEP: Deleting claim pvc-4fbj8
Mar 25 11:54:51.533: INFO: Waiting up to 2m0s for PersistentVolume pvc-26ec8780-c0f1-4911-8f97-299243612fd1 to get deleted
Mar 25 11:54:51.550: INFO: PersistentVolume pvc-26ec8780-c0f1-4911-8f97-299243612fd1 found and phase=Bound (16.648007ms)
Mar 25 11:54:53.747: INFO: PersistentVolume pvc-26ec8780-c0f1-4911-8f97-299243612fd1 was removed
STEP: Deleting storageclass csi-mock-volumes-9724-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9724
STEP: Waiting for namespaces [csi-mock-volumes-9724] to vanish
STEP: uninstalling csi mock driver
Mar 25 11:55:16.163: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-attacher
Mar 25 11:55:16.202: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9724
Mar 25 11:55:16.424: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9724
Mar 25 11:55:16.505: INFO: deleting *v1.Role: csi-mock-volumes-9724-2743/external-attacher-cfg-csi-mock-volumes-9724
Mar 25 11:55:16.532: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9724-2743/csi-attacher-role-cfg
Mar 25 11:55:16.646: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-provisioner
Mar 25 11:55:16.681: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9724
Mar 25 11:55:16.710: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9724
Mar 25 11:55:16.849: INFO: deleting *v1.Role: csi-mock-volumes-9724-2743/external-provisioner-cfg-csi-mock-volumes-9724
Mar 25 11:55:17.582: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9724-2743/csi-provisioner-role-cfg
Mar 25 11:55:17.661: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-resizer
Mar 25 11:55:17.736: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9724
Mar 25 11:55:17.919: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9724
Mar 25 11:55:17.993: INFO: deleting *v1.Role: csi-mock-volumes-9724-2743/external-resizer-cfg-csi-mock-volumes-9724
Mar 25 11:55:18.293: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9724-2743/csi-resizer-role-cfg
Mar 25 11:55:20.783: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-snapshotter
Mar 25 11:55:21.682: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9724
Mar 25 11:55:22.379: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9724
Mar 25 11:55:22.523: INFO: deleting *v1.Role: csi-mock-volumes-9724-2743/external-snapshotter-leaderelection-csi-mock-volumes-9724
Mar 25 11:55:22.653: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9724-2743/external-snapshotter-leaderelection
Mar 25 11:55:22.840: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9724-2743/csi-mock
Mar 25 11:55:22.964: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9724
Mar 25 11:55:23.107: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9724
Mar 25 11:55:23.176: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9724
Mar 25 11:55:23.269: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9724
Mar 25 11:55:23.419: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9724
Mar 25 11:55:23.585: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9724
Mar 25 11:55:23.833: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9724
Mar 25 11:55:25.203: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9724-2743/csi-mockplugin
Mar 25 11:55:25.419: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9724
Mar 25 11:55:26.144: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9724-2743/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-9724-2743
STEP: Waiting for namespaces [csi-mock-volumes-9724-2743] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:56:37.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:216.755 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":133,"completed":12,"skipped":466,"failed":0}
SSSSS
------------------------------
[sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:141
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:56:37.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77
Mar 25 11:56:37.726: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:56:37.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8195" for this suite.
[AfterEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:110
Mar 25 11:56:38.014: INFO: AfterEach: Cleaning up test resources
Mar 25 11:56:38.014: INFO: pvc is nil
Mar 25 11:56:38.014: INFO: pv is nil
S [SKIPPING] in Spec Setup (BeforeEach) [0.523 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:141
  Only supported for providers [gce gke] (not local)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:56:38.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-0da79d0d-8832-434a-886d-0f6f410b07a3"
Mar 25 11:56:46.882: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0da79d0d-8832-434a-886d-0f6f410b07a3 && dd if=/dev/zero of=/tmp/local-volume-test-0da79d0d-8832-434a-886d-0f6f410b07a3/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-0da79d0d-8832-434a-886d-0f6f410b07a3/file] Namespace:persistent-local-volumes-test-151 PodName:hostexec-latest-worker2-g4fpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:56:46.882: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:56:47.408: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0da79d0d-8832-434a-886d-0f6f410b07a3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-151 PodName:hostexec-latest-worker2-g4fpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:56:47.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 11:56:48.661: INFO: Creating a PV followed by a PVC
Mar 25 11:56:48.790: INFO: Waiting for PV local-pvk9px8 to bind to PVC pvc-djkjj
Mar 25 11:56:48.790: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-djkjj] to have phase Bound
Mar 25 11:56:49.521: INFO: PersistentVolumeClaim pvc-djkjj found but phase is Pending instead of Bound.
Mar 25 11:56:52.109: INFO: PersistentVolumeClaim pvc-djkjj found and phase=Bound (3.318672691s)
Mar 25 11:56:52.109: INFO: Waiting up to 3m0s for PersistentVolume local-pvk9px8 to have phase Bound
Mar 25 11:56:52.377: INFO: PersistentVolume local-pvk9px8 found and phase=Bound (268.178859ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
Mar 25 11:56:52.799: INFO: We don't set fsGroup on block device, skipped.
[AfterEach] [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 11:56:52.800: INFO: Deleting PersistentVolumeClaim "pvc-djkjj"
Mar 25 11:56:53.109: INFO: Deleting PersistentVolume "local-pvk9px8"
Mar 25 11:56:53.452: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0da79d0d-8832-434a-886d-0f6f410b07a3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-151 PodName:hostexec-latest-worker2-g4fpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:56:53.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-0da79d0d-8832-434a-886d-0f6f410b07a3/file
Mar 25 11:56:53.711: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-151 PodName:hostexec-latest-worker2-g4fpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:56:53.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-0da79d0d-8832-434a-886d-0f6f410b07a3
Mar 25 11:56:53.963: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0da79d0d-8832-434a-886d-0f6f410b07a3] Namespace:persistent-local-volumes-test-151 PodName:hostexec-latest-worker2-g4fpw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 11:56:53.963: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:56:54.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-151" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [16.418 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set different fsGroup for second pod if first pod is deleted [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
      We don't set fsGroup on block device, skipped.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:56:54.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] CSIStorageCapacity used, insufficient capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
STEP: Building a driver namespace object, basename csi-mock-volumes-566
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 11:56:57.913: INFO: creating *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-attacher
Mar 25 11:56:57.941: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-566
Mar 25 11:56:57.941: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-566
Mar 25 11:56:57.977: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-566
Mar 25 11:56:58.091: INFO: creating *v1.Role: csi-mock-volumes-566-1961/external-attacher-cfg-csi-mock-volumes-566
Mar 25 11:56:58.998: INFO: creating *v1.RoleBinding: csi-mock-volumes-566-1961/csi-attacher-role-cfg
Mar 25 11:56:59.241: INFO: creating *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-provisioner
Mar 25 11:56:59.300: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-566
Mar 25 11:56:59.300: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-566
Mar 25 11:56:59.443: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-566
Mar 25 11:56:59.510: INFO: creating *v1.Role: csi-mock-volumes-566-1961/external-provisioner-cfg-csi-mock-volumes-566
Mar 25 11:57:00.971: INFO: creating *v1.RoleBinding: csi-mock-volumes-566-1961/csi-provisioner-role-cfg
Mar 25 11:57:01.275: INFO: creating *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-resizer
Mar 25 11:57:01.306: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-566
Mar 25 11:57:01.306: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-566
Mar 25 11:57:01.428: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-566
Mar 25 11:57:01.463: INFO: creating *v1.Role: csi-mock-volumes-566-1961/external-resizer-cfg-csi-mock-volumes-566
Mar 25 11:57:01.467: INFO: creating *v1.RoleBinding: csi-mock-volumes-566-1961/csi-resizer-role-cfg
Mar 25 11:57:01.497: INFO: creating *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-snapshotter
Mar 25 11:57:02.405: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-566
Mar 25 11:57:02.405: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-566
Mar 25 11:57:02.563: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-566
Mar 25 11:57:02.917: INFO: creating *v1.Role: csi-mock-volumes-566-1961/external-snapshotter-leaderelection-csi-mock-volumes-566
Mar 25 11:57:03.282: INFO: creating *v1.RoleBinding: csi-mock-volumes-566-1961/external-snapshotter-leaderelection
Mar 25 11:57:03.636: INFO: creating *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-mock
Mar 25 11:57:03.679: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-566
Mar 25 11:57:04.577: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-566
Mar 25 11:57:04.611: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-566
Mar 25 11:57:04.623: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-566
Mar 25 11:57:05.013: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-566
Mar 25 11:57:05.261: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-566
Mar 25 11:57:05.318: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-566
Mar 25 11:57:05.485: INFO: creating *v1.StatefulSet: csi-mock-volumes-566-1961/csi-mockplugin
Mar 25 11:57:05.498: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-566
Mar 25 11:57:05.642: INFO: creating *v1.StatefulSet: csi-mock-volumes-566-1961/csi-mockplugin-attacher
Mar 25 11:57:05.803: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-566"
Mar 25 11:57:06.006: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-566 to register on node latest-worker2
Mar 25 11:57:23.076: FAIL: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-566 Capacity:1Mi MaximumVolumeSize:}
Unexpected error:
    <*errors.StatusError | 0xc000da2e60>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 +0x47a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003264a80)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc003264a80)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003264a80, 0x6d60740)
  /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1239 +0x2b3
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-566
STEP: Waiting for namespaces [csi-mock-volumes-566] to vanish
STEP: uninstalling csi mock driver
Mar 25 11:57:35.235: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-attacher
Mar 25 11:57:35.318: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-566
Mar 25 11:57:35.413: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-566
Mar 25 11:57:36.774: INFO: deleting *v1.Role: csi-mock-volumes-566-1961/external-attacher-cfg-csi-mock-volumes-566
Mar 25 11:57:37.019: INFO: deleting *v1.RoleBinding: csi-mock-volumes-566-1961/csi-attacher-role-cfg
Mar 25 11:57:37.097: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-provisioner
Mar 25 11:57:37.303: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-566
Mar 25 11:57:38.027: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-566
Mar 25 11:57:38.164: INFO: deleting *v1.Role: csi-mock-volumes-566-1961/external-provisioner-cfg-csi-mock-volumes-566
Mar 25 11:57:38.312: INFO: deleting *v1.RoleBinding: csi-mock-volumes-566-1961/csi-provisioner-role-cfg
Mar 25 11:57:38.362: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-resizer
Mar 25 11:57:38.389: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-566
Mar 25 11:57:38.476: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-566
Mar 25 11:57:38.534: INFO: deleting *v1.Role: csi-mock-volumes-566-1961/external-resizer-cfg-csi-mock-volumes-566
Mar 25 11:57:38.657: INFO: deleting *v1.RoleBinding: csi-mock-volumes-566-1961/csi-resizer-role-cfg
Mar 25 11:57:38.750: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-snapshotter
Mar 25 11:57:38.803: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-566
Mar 25 11:57:38.886: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-566
Mar 25 11:57:38.946: INFO: deleting *v1.Role: csi-mock-volumes-566-1961/external-snapshotter-leaderelection-csi-mock-volumes-566
Mar 25 11:57:39.106: INFO: deleting *v1.RoleBinding: csi-mock-volumes-566-1961/external-snapshotter-leaderelection
Mar 25 11:57:39.172: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-566-1961/csi-mock
Mar 25 11:57:39.318: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-566
Mar 25 11:57:39.371: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-566
Mar 25 11:57:39.463: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-566
Mar 25 11:57:39.534: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-566
Mar 25 11:57:39.660: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-566
Mar 25 11:57:39.691: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-566
Mar 25 11:57:39.787: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-566
Mar 25 11:57:39.875: INFO: deleting *v1.StatefulSet: csi-mock-volumes-566-1961/csi-mockplugin
Mar 25 11:57:39.963: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-566
Mar 25 11:57:40.072: INFO: deleting *v1.StatefulSet: csi-mock-volumes-566-1961/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-566-1961
STEP: Waiting for namespaces [csi-mock-volumes-566-1961] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:59:02.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• Failure [130.176 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, insufficient capacity [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177

    Mar 25 11:57:23.076: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-566 Capacity:1Mi MaximumVolumeSize:}
    Unexpected error:
        <*errors.StatusError | 0xc000da2e60>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {
                    SelfLink: "",
                    ResourceVersion: "",
                    Continue: "",
                    RemainingItemCount: nil,
                },
                Status: "Failure",
                Message: "the server could not find the requested resource",
                Reason: "NotFound",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 404,
            },
        }
        the server could not find the requested resource
    occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":133,"completed":12,"skipped":587,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:59:04.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 11:59:18.009: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-c7b990a9-95d5-4aaa-af60-fd2fe0349c7f && mount --bind /tmp/local-volume-test-c7b990a9-95d5-4aaa-af60-fd2fe0349c7f /tmp/local-volume-test-c7b990a9-95d5-4aaa-af60-fd2fe0349c7f] 
Namespace:persistent-local-volumes-test-1898 PodName:hostexec-latest-worker-pnjj5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:59:18.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 11:59:18.953: INFO: Creating a PV followed by a PVC Mar 25 11:59:19.397: INFO: Waiting for PV local-pvwd6pw to bind to PVC pvc-k69zh Mar 25 11:59:19.397: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-k69zh] to have phase Bound Mar 25 11:59:19.403: INFO: PersistentVolumeClaim pvc-k69zh found but phase is Pending instead of Bound. Mar 25 11:59:21.546: INFO: PersistentVolumeClaim pvc-k69zh found and phase=Bound (2.149009914s) Mar 25 11:59:21.546: INFO: Waiting up to 3m0s for PersistentVolume local-pvwd6pw to have phase Bound Mar 25 11:59:21.750: INFO: PersistentVolume local-pvwd6pw found and phase=Bound (203.904794ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 11:59:31.458: INFO: pod "pod-aa100fef-9ec1-4467-9acc-f966bb7498bb" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 11:59:31.458: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1898 PodName:pod-aa100fef-9ec1-4467-9acc-f966bb7498bb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:59:31.458: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:59:32.595: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 11:59:32.595: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1898 
PodName:pod-aa100fef-9ec1-4467-9acc-f966bb7498bb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:59:32.595: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:59:33.307: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-aa100fef-9ec1-4467-9acc-f966bb7498bb in namespace persistent-local-volumes-test-1898 STEP: Creating pod2 STEP: Creating a pod Mar 25 11:59:44.908: INFO: pod "pod-8b175b22-5e0d-4796-8007-29f022f39767" created on Node "latest-worker" STEP: Reading in pod2 Mar 25 11:59:44.908: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1898 PodName:pod-8b175b22-5e0d-4796-8007-29f022f39767 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 11:59:44.908: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:59:45.122: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-8b175b22-5e0d-4796-8007-29f022f39767 in namespace persistent-local-volumes-test-1898 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 11:59:45.184: INFO: Deleting PersistentVolumeClaim "pvc-k69zh" Mar 25 11:59:45.338: INFO: Deleting PersistentVolume "local-pvwd6pw" STEP: Removing the test directory Mar 25 11:59:45.405: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-c7b990a9-95d5-4aaa-af60-fd2fe0349c7f && rm -r /tmp/local-volume-test-c7b990a9-95d5-4aaa-af60-fd2fe0349c7f] Namespace:persistent-local-volumes-test-1898 PodName:hostexec-latest-worker-pnjj5 ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:59:45.405: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:59:45.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1898" for this suite. • [SLOW TEST:42.129 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":13,"skipped":709,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:59:46.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 11:59:54.536: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-1c66adaa-7e24-4971-9196-5d598f46f1ce-backend && ln -s /tmp/local-volume-test-1c66adaa-7e24-4971-9196-5d598f46f1ce-backend /tmp/local-volume-test-1c66adaa-7e24-4971-9196-5d598f46f1ce] Namespace:persistent-local-volumes-test-4244 PodName:hostexec-latest-worker2-f62rd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 11:59:54.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 11:59:54.733: INFO: Creating a PV followed by a PVC Mar 25 11:59:54.853: INFO: Waiting for PV local-pv4rbjq to bind to PVC pvc-qxwrr Mar 25 11:59:54.853: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qxwrr] to have phase Bound Mar 25 11:59:54.879: INFO: PersistentVolumeClaim pvc-qxwrr found but phase is Pending instead of Bound. Mar 25 11:59:57.104: INFO: PersistentVolumeClaim pvc-qxwrr found but phase is Pending instead of Bound. Mar 25 12:00:00.023: INFO: PersistentVolumeClaim pvc-qxwrr found but phase is Pending instead of Bound. 
Mar 25 12:00:02.359: INFO: PersistentVolumeClaim pvc-qxwrr found but phase is Pending instead of Bound. Mar 25 12:00:04.421: INFO: PersistentVolumeClaim pvc-qxwrr found and phase=Bound (9.567845883s) Mar 25 12:00:04.421: INFO: Waiting up to 3m0s for PersistentVolume local-pv4rbjq to have phase Bound Mar 25 12:00:04.472: INFO: PersistentVolume local-pv4rbjq found and phase=Bound (51.190083ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 12:00:14.948: INFO: pod "pod-a6bbe1a3-49e5-40da-a5a1-f4d8ed5f5fcd" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 12:00:14.948: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4244 PodName:pod-a6bbe1a3-49e5-40da-a5a1-f4d8ed5f5fcd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:00:14.948: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:00:15.140: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 12:00:15.140: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4244 PodName:pod-a6bbe1a3-49e5-40da-a5a1-f4d8ed5f5fcd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:00:15.140: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:00:15.268: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 12:00:15.268: INFO: 
ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1c66adaa-7e24-4971-9196-5d598f46f1ce > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4244 PodName:pod-a6bbe1a3-49e5-40da-a5a1-f4d8ed5f5fcd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:00:15.268: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:00:15.508: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1c66adaa-7e24-4971-9196-5d598f46f1ce > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-a6bbe1a3-49e5-40da-a5a1-f4d8ed5f5fcd in namespace persistent-local-volumes-test-4244 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:00:15.597: INFO: Deleting PersistentVolumeClaim "pvc-qxwrr" Mar 25 12:00:15.754: INFO: Deleting PersistentVolume "local-pv4rbjq" STEP: Removing the test directory Mar 25 12:00:15.956: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1c66adaa-7e24-4971-9196-5d598f46f1ce && rm -r /tmp/local-volume-test-1c66adaa-7e24-4971-9196-5d598f46f1ce-backend] Namespace:persistent-local-volumes-test-4244 PodName:hostexec-latest-worker2-f62rd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:15.956: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:00:17.483: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4244" for this suite. • [SLOW TEST:32.156 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":14,"skipped":787,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:00:18.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "latest-worker" STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-e7bfb771-5f84-478f-b327-8aa0d7f85d76" Mar 25 12:00:28.888: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e7bfb771-5f84-478f-b327-8aa0d7f85d76" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e7bfb771-5f84-478f-b327-8aa0d7f85d76" "/tmp/local-volume-test-e7bfb771-5f84-478f-b327-8aa0d7f85d76"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:28.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-8ce216f6-2025-48d5-af7e-ef5903e6c336" Mar 25 12:00:29.485: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8ce216f6-2025-48d5-af7e-ef5903e6c336" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8ce216f6-2025-48d5-af7e-ef5903e6c336" "/tmp/local-volume-test-8ce216f6-2025-48d5-af7e-ef5903e6c336"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:29.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-5af4e45d-3a5a-4c75-9d38-c30bbfe8e96f" Mar 25 12:00:29.603: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5af4e45d-3a5a-4c75-9d38-c30bbfe8e96f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5af4e45d-3a5a-4c75-9d38-c30bbfe8e96f" "/tmp/local-volume-test-5af4e45d-3a5a-4c75-9d38-c30bbfe8e96f"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:29.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-10337142-fd07-4a55-90db-473aafb9c62e" Mar 25 12:00:29.815: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-10337142-fd07-4a55-90db-473aafb9c62e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-10337142-fd07-4a55-90db-473aafb9c62e" "/tmp/local-volume-test-10337142-fd07-4a55-90db-473aafb9c62e"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:29.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-0d937990-de20-481b-8048-2cac0a2425ad" Mar 25 12:00:29.942: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0d937990-de20-481b-8048-2cac0a2425ad" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0d937990-de20-481b-8048-2cac0a2425ad" "/tmp/local-volume-test-0d937990-de20-481b-8048-2cac0a2425ad"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:29.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on 
node "latest-worker" at path "/tmp/local-volume-test-e79e9be6-ad3f-45dd-8fdc-c4e9bedbf74b" Mar 25 12:00:30.102: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e79e9be6-ad3f-45dd-8fdc-c4e9bedbf74b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e79e9be6-ad3f-45dd-8fdc-c4e9bedbf74b" "/tmp/local-volume-test-e79e9be6-ad3f-45dd-8fdc-c4e9bedbf74b"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:30.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-4eca7bf2-e897-4322-b780-290858bb4233" Mar 25 12:00:30.274: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4eca7bf2-e897-4322-b780-290858bb4233" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4eca7bf2-e897-4322-b780-290858bb4233" "/tmp/local-volume-test-4eca7bf2-e897-4322-b780-290858bb4233"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:30.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-a8ac093d-12e8-4abc-b4d7-53d5b60af75a" Mar 25 12:00:30.550: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a8ac093d-12e8-4abc-b4d7-53d5b60af75a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a8ac093d-12e8-4abc-b4d7-53d5b60af75a" "/tmp/local-volume-test-a8ac093d-12e8-4abc-b4d7-53d5b60af75a"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:30.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-69d5670c-09cc-4c8a-a56d-d323a23c7277" Mar 25 12:00:30.809: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-69d5670c-09cc-4c8a-a56d-d323a23c7277" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-69d5670c-09cc-4c8a-a56d-d323a23c7277" "/tmp/local-volume-test-69d5670c-09cc-4c8a-a56d-d323a23c7277"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:30.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-36506673-b4a7-4ce7-947d-38e45403c646" Mar 25 12:00:31.505: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-36506673-b4a7-4ce7-947d-38e45403c646" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-36506673-b4a7-4ce7-947d-38e45403c646" "/tmp/local-volume-test-36506673-b4a7-4ce7-947d-38e45403c646"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:31.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "latest-worker2" STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-634b76df-20a2-4b47-b5a7-4adcf11d3184" Mar 25 12:00:36.677: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-634b76df-20a2-4b47-b5a7-4adcf11d3184" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-634b76df-20a2-4b47-b5a7-4adcf11d3184" "/tmp/local-volume-test-634b76df-20a2-4b47-b5a7-4adcf11d3184"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:36.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-ad84026f-38ea-4163-b376-57f58919f4ef" Mar 25 12:00:36.853: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ad84026f-38ea-4163-b376-57f58919f4ef" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ad84026f-38ea-4163-b376-57f58919f4ef" "/tmp/local-volume-test-ad84026f-38ea-4163-b376-57f58919f4ef"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:36.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-5826cb9d-9521-4b59-a8d6-6018ec84bae4" Mar 25 12:00:36.963: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5826cb9d-9521-4b59-a8d6-6018ec84bae4" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5826cb9d-9521-4b59-a8d6-6018ec84bae4" "/tmp/local-volume-test-5826cb9d-9521-4b59-a8d6-6018ec84bae4"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:36.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-9a5bdc86-d51e-483e-9b78-00f14110bef5" Mar 25 12:00:37.161: INFO: ExecWithOptions 
{Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9a5bdc86-d51e-483e-9b78-00f14110bef5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9a5bdc86-d51e-483e-9b78-00f14110bef5" "/tmp/local-volume-test-9a5bdc86-d51e-483e-9b78-00f14110bef5"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:37.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-7f929152-7dc4-4988-b457-8f8dd16d3d18" Mar 25 12:00:37.374: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7f929152-7dc4-4988-b457-8f8dd16d3d18" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7f929152-7dc4-4988-b457-8f8dd16d3d18" "/tmp/local-volume-test-7f929152-7dc4-4988-b457-8f8dd16d3d18"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:37.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-dfa655bd-eaa1-46a6-9151-cc66aba01d9b" Mar 25 12:00:37.544: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-dfa655bd-eaa1-46a6-9151-cc66aba01d9b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-dfa655bd-eaa1-46a6-9151-cc66aba01d9b" "/tmp/local-volume-test-dfa655bd-eaa1-46a6-9151-cc66aba01d9b"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:37.544: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-e6fa280c-4bad-4ff2-b98e-d6eb40b1d7c0" Mar 25 12:00:37.694: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e6fa280c-4bad-4ff2-b98e-d6eb40b1d7c0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e6fa280c-4bad-4ff2-b98e-d6eb40b1d7c0" "/tmp/local-volume-test-e6fa280c-4bad-4ff2-b98e-d6eb40b1d7c0"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:37.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-7c9e7d3a-8626-4d0e-9e6a-a51805a515e5" Mar 25 12:00:37.886: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7c9e7d3a-8626-4d0e-9e6a-a51805a515e5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7c9e7d3a-8626-4d0e-9e6a-a51805a515e5" "/tmp/local-volume-test-7c9e7d3a-8626-4d0e-9e6a-a51805a515e5"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:00:37.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-158658eb-837a-4b60-8441-b89b0dbf2f6c" Mar 25 12:00:38.114: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-158658eb-837a-4b60-8441-b89b0dbf2f6c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-158658eb-837a-4b60-8441-b89b0dbf2f6c" "/tmp/local-volume-test-158658eb-837a-4b60-8441-b89b0dbf2f6c"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m 
ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:00:38.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-cab29ba9-1f51-46ee-864c-8087036ffe11"
Mar 25 12:00:39.438: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-cab29ba9-1f51-46ee-864c-8087036ffe11" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cab29ba9-1f51-46ee-864c-8087036ffe11" "/tmp/local-volume-test-cab29ba9-1f51-46ee-864c-8087036ffe11"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:00:39.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Create 20 PVs
STEP: Start a goroutine to recycle unbound PVs
[It] should be able to process many pods and reuse local volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531
STEP: Creating 7 pods periodically
STEP: Waiting for all pods to complete successfully
Mar 25 12:00:52.850: INFO: Deleting pod pod-dfd9d64b-bf83-4501-9827-07cd23146491
Mar 25 12:00:54.166: INFO: Deleting PersistentVolumeClaim "pvc-vbs4f"
Mar 25 12:00:55.570: INFO: Deleting PersistentVolumeClaim "pvc-qn26c"
Mar 25 12:00:56.356: INFO: Deleting PersistentVolumeClaim "pvc-frj6b"
Mar 25 12:00:57.366: INFO: 1/28 pods finished
Mar 25 12:00:58.640: INFO: Deleting pod pod-2ddd83c5-648c-42f1-aecb-3171534e482a
STEP: Delete "local-pvftw4l" and create a new PV for same local volume storage
Mar 25 12:01:00.023: INFO: Deleting PersistentVolumeClaim "pvc-g2hmw"
Mar 25 12:01:00.147: INFO: Deleting PersistentVolumeClaim "pvc-snrmt"
STEP: Delete "local-pvftw4l" and create a new PV for same local volume storage
Mar 25 12:01:00.431: INFO: Deleting PersistentVolumeClaim "pvc-pnvd2"
STEP: Delete "local-pvbrdq4" and create a new PV for same local volume storage
Mar 25 12:01:01.187: INFO: 2/28 pods finished
Mar 25 12:01:01.187: INFO: Deleting pod pod-ac45c850-8f21-46be-bdb5-b28b9a501087
STEP: Delete "local-pvbrdq4" and create a new PV for same local volume storage
Mar 25 12:01:02.109: INFO: Deleting PersistentVolumeClaim "pvc-dtnmc"
STEP: Delete "local-pvdrm2l" and create a new PV for same local volume storage
Mar 25 12:01:03.327: INFO: Deleting PersistentVolumeClaim "pvc-zwrhr"
Mar 25 12:01:03.948: INFO: Deleting PersistentVolumeClaim "pvc-6xhhn"
STEP: Delete "local-pvkzd5j" and create a new PV for same local volume storage
Mar 25 12:01:04.522: INFO: 3/28 pods finished
STEP: Delete "local-pv4r75t" and create a new PV for same local volume storage
Mar 25 12:01:05.028: INFO: Deleting pod pod-3fb7da43-aa06-4737-acc8-97cc80fa5581
STEP: Delete "local-pvs5lhp" and create a new PV for same local volume storage
Mar 25 12:01:05.758: INFO: Deleting PersistentVolumeClaim "pvc-nzvdl"
Mar 25 12:01:06.077: INFO: Deleting PersistentVolumeClaim "pvc-95l92"
STEP: Delete "local-pvr86bv" and create a new PV for same local volume storage
Mar 25 12:01:06.269: INFO: Deleting PersistentVolumeClaim "pvc-jnn66"
Mar 25 12:01:06.459: INFO: 4/28 pods finished
Mar 25 12:01:06.459: INFO: Deleting pod pod-95132eb2-919c-478f-b260-fcc67c609912
STEP: Delete "local-pv2nlpl" and create a new PV for same local volume storage
Mar 25 12:01:06.717: INFO: Deleting PersistentVolumeClaim "pvc-lmvpr"
Mar 25 12:01:07.249: INFO: Deleting PersistentVolumeClaim "pvc-fdf2t"
Mar 25 12:01:07.473: INFO: Deleting PersistentVolumeClaim "pvc-hlqzn"
STEP: Delete "local-pvlx8gs" and create a new PV for same local volume storage
Mar 25 12:01:07.527: INFO: 5/28 pods finished
Mar 25 12:01:07.527: INFO: Deleting pod pod-fa739ddf-e38b-4be9-8353-21a7d64d5ac6
STEP: Delete "local-pv4c5cq" and create a new PV for same local volume storage
Mar 25 12:01:08.341: INFO: Deleting PersistentVolumeClaim "pvc-xp7q9"
Mar 25 12:01:08.755: INFO: Deleting PersistentVolumeClaim "pvc-rqsmt"
STEP: Delete "local-pvhr6g6" and create a new PV for same local volume storage
Mar 25 12:01:08.808: INFO: Deleting PersistentVolumeClaim "pvc-kjrk9"
Mar 25 12:01:08.931: INFO: 6/28 pods finished
STEP: Delete "local-pvgm8wh" and create a new PV for same local volume storage
STEP: Delete "local-pv6tm75" and create a new PV for same local volume storage
STEP: Delete "local-pvml88k" and create a new PV for same local volume storage
STEP: Delete "local-pv4fw6b" and create a new PV for same local volume storage
STEP: Delete "local-pvrtf9h" and create a new PV for same local volume storage
STEP: Delete "local-pvdnfsw" and create a new PV for same local volume storage
STEP: Delete "local-pvb569z" and create a new PV for same local volume storage
Mar 25 12:01:22.174: INFO: Deleting pod pod-c12984ea-b0bc-4d3b-a5d4-3fda84ff66b9
Mar 25 12:01:23.070: INFO: Deleting PersistentVolumeClaim "pvc-cbzf7"
Mar 25 12:01:23.794: INFO: Deleting PersistentVolumeClaim "pvc-wqn5k"
Mar 25 12:01:24.625: INFO: Deleting PersistentVolumeClaim "pvc-vn6vz"
Mar 25 12:01:24.893: INFO: 7/28 pods finished
Mar 25 12:01:25.298: INFO: Deleting pod pod-52a73a7d-3d8e-4f89-99d7-31af26c6a3c3
STEP: Delete "local-pvqxjsl" and create a new PV for same local volume storage
Mar 25 12:01:26.817: INFO: Deleting PersistentVolumeClaim "pvc-kp65k"
Mar 25 12:01:27.566: INFO: Deleting PersistentVolumeClaim "pvc-5dgm2"
STEP: Delete "local-pvqxjsl" and create a new PV for same local volume storage
STEP: Delete "local-pvll5jk" and create a new PV for same local volume storage
Mar 25 12:01:28.094: INFO: Deleting PersistentVolumeClaim "pvc-gw956"
Mar 25 12:01:28.308: INFO: 8/28 pods finished
Mar 25 12:01:28.308: INFO: Deleting pod pod-f2b6ce62-a5bf-4ff4-b072-0a488ea531b2
STEP: Delete "local-pvll5jk" and create a new PV for same local volume storage
STEP: Delete "local-pvrrnvm" and create a new PV for same local volume storage
Mar 25 12:01:29.679: INFO: Deleting PersistentVolumeClaim "pvc-zj7rt"
Mar 25 12:01:30.885: INFO: Deleting PersistentVolumeClaim "pvc-bbs78"
STEP: Delete "local-pvrrnvm" and create a new PV for same local volume storage
Mar 25 12:01:31.050: INFO: Deleting PersistentVolumeClaim "pvc-2njdw"
STEP: Delete "local-pvf6zhz" and create a new PV for same local volume storage
Mar 25 12:01:31.166: INFO: 9/28 pods finished
STEP: Delete "local-pvf6zhz" and create a new PV for same local volume storage
STEP: Delete "local-pv5kk87" and create a new PV for same local volume storage
Mar 25 12:01:32.263: INFO: Deleting pod pod-0616e748-2a81-43e4-b832-60638dc38865
STEP: Delete "local-pv5kk87" and create a new PV for same local volume storage
STEP: Delete "local-pvprmtv" and create a new PV for same local volume storage
Mar 25 12:01:34.068: INFO: Deleting PersistentVolumeClaim "pvc-gbkkx"
Mar 25 12:01:34.715: INFO: Deleting PersistentVolumeClaim "pvc-crzc8"
STEP: Delete "local-pvprmtv" and create a new PV for same local volume storage
Mar 25 12:01:35.728: INFO: Deleting PersistentVolumeClaim "pvc-8jn44"
STEP: Delete "local-pvbthbp" and create a new PV for same local volume storage
Mar 25 12:01:36.204: INFO: 10/28 pods finished
Mar 25 12:01:36.204: INFO: Deleting pod pod-34fdea5c-5c1d-4149-a4d6-63376de9cc08
STEP: Delete "local-pvbthbp" and create a new PV for same local volume storage
STEP: Delete "local-pvjwflc" and create a new PV for same local volume storage
Mar 25 12:01:38.352: INFO: Deleting PersistentVolumeClaim "pvc-5lng6"
Mar 25 12:01:39.278: INFO: Deleting PersistentVolumeClaim "pvc-5tcs2"
STEP: Delete "local-pvjwflc" and create a new PV for same local volume storage
Mar 25 12:01:39.464: INFO: Deleting PersistentVolumeClaim "pvc-htjk4"
Mar 25 12:01:40.270: INFO: 11/28 pods finished
Mar 25 12:01:40.270: INFO: Deleting pod pod-a4de4bbd-da71-4c9d-85b1-eb8e8ebe4879
Mar 25 12:01:43.033: INFO: Deleting PersistentVolumeClaim "pvc-gcr7d"
Mar 25 12:01:44.423: INFO: Deleting PersistentVolumeClaim "pvc-25wm5"
STEP: Delete "local-pv46hq6" and create a new PV for same local volume storage
Mar 25 12:01:44.687: INFO: Deleting PersistentVolumeClaim "pvc-qcxkt"
Mar 25 12:01:44.906: INFO: 12/28 pods finished
STEP: Delete "local-pv7k7fb" and create a new PV for same local volume storage
STEP: Delete "local-pvff26c" and create a new PV for same local volume storage
STEP: Delete "local-pvll5kp" and create a new PV for same local volume storage
STEP: Delete "local-pvfzxjf" and create a new PV for same local volume storage
STEP: Delete "local-pvhfzqn" and create a new PV for same local volume storage
Mar 25 12:02:01.617: INFO: Deleting pod pod-ca2566b3-8ec8-4967-8ce8-819c54faa9a9
STEP: Delete "local-pvdr79h" and create a new PV for same local volume storage
Mar 25 12:02:05.164: INFO: Deleting PersistentVolumeClaim "pvc-g5bzb"
Mar 25 12:02:06.451: INFO: Deleting PersistentVolumeClaim "pvc-zfwkv"
STEP: Delete "local-pvnj9xq" and create a new PV for same local volume storage
Mar 25 12:02:07.941: INFO: Deleting PersistentVolumeClaim "pvc-6qs9w"
Mar 25 12:02:11.703: INFO: 13/28 pods finished
STEP: Delete "local-pvdmgnt" and create a new PV for same local volume storage
STEP: Delete "local-pv7g42h" and create a new PV for same local volume storage
Mar 25 12:02:16.712: INFO: Deleting pod pod-9d08b5ba-b2d8-40eb-a92f-24c3746c7708
Mar 25 12:02:19.909: INFO: Deleting PersistentVolumeClaim "pvc-ntcxr"
Mar 25 12:02:21.053: INFO: Deleting PersistentVolumeClaim "pvc-58w2c"
STEP: Delete "local-pv7srzx" and create a new PV for same local volume storage
Mar 25 12:02:22.896: INFO: Deleting PersistentVolumeClaim "pvc-kmbn6"
Mar 25 12:02:24.875: INFO: 14/28 pods finished
Mar 25 12:02:24.875: INFO: Deleting pod pod-ea50106f-0614-4104-aee0-b9b438172030
STEP: Delete "local-pv7fq2p" and create a new PV for same local volume storage
Mar 25 12:02:27.957: INFO: Deleting PersistentVolumeClaim "pvc-mnwzr"
Mar 25 12:02:28.919: INFO: Deleting PersistentVolumeClaim "pvc-llwnq"
STEP: Delete "local-pvpmczn" and create a new PV for same local volume storage
Mar 25 12:02:29.404: INFO: Deleting PersistentVolumeClaim "pvc-bwp4q"
Mar 25 12:02:29.483: INFO: 15/28 pods finished
STEP: Delete "local-pvzjspr" and create a new PV for same local volume storage
STEP: Delete "local-pvlqgqc" and create a new PV for same local volume storage
Mar 25 12:02:30.098: INFO: Deleting pod pod-06205f52-81c4-4795-a6e1-2674823bb5b3
Mar 25 12:02:30.638: INFO: Deleting PersistentVolumeClaim "pvc-mjblh"
Mar 25 12:02:32.433: INFO: Deleting PersistentVolumeClaim "pvc-qrflv"
STEP: Delete "local-pvmj4rk" and create a new PV for same local volume storage
Mar 25 12:02:36.048: INFO: Deleting PersistentVolumeClaim "pvc-j77c9"
Mar 25 12:02:38.016: INFO: 16/28 pods finished
STEP: Delete "local-pvrr7fl" and create a new PV for same local volume storage
Mar 25 12:02:39.186: INFO: Deleting pod pod-28387a40-2b6c-4f2f-9822-a86b5a046144
STEP: Delete "local-pvrlg4x" and create a new PV for same local volume storage
Mar 25 12:02:43.191: INFO: Deleting PersistentVolumeClaim "pvc-qp7kl"
Mar 25 12:02:45.537: INFO: Deleting PersistentVolumeClaim "pvc-bvzl2"
Mar 25 12:02:46.589: INFO: Deleting PersistentVolumeClaim "pvc-zqr5c"
STEP: Delete "local-pvbmb8p" and create a new PV for same local volume storage
Mar 25 12:02:47.222: INFO: 17/28 pods finished
STEP: Delete "local-pvh5fm2" and create a new PV for same local volume storage
Mar 25 12:02:51.082: INFO: Deleting pod pod-c06093b6-0b6d-49a5-8f3c-5425748d0556
Mar 25 12:02:54.216: INFO: Deleting PersistentVolumeClaim "pvc-qr9vz"
STEP: Delete "local-pvklmxg" and create a new PV for same local volume storage
Mar 25 12:02:54.646: INFO: Deleting PersistentVolumeClaim "pvc-mjnjc"
Mar 25 12:02:56.339: INFO: Deleting PersistentVolumeClaim "pvc-n654f"
Mar 25 12:02:57.202: INFO: 18/28 pods finished
STEP: Delete "local-pvrglkk" and create a new PV for same local volume storage
STEP: Delete "local-pvrmfxj" and create a new PV for same local volume storage
STEP: Delete "local-pvjbz22" and create a new PV for same local volume storage
STEP: Delete "local-pv5q84j" and create a new PV for same local volume storage
Mar 25 12:03:06.436: INFO: Deleting pod pod-975f50e5-c6d0-4ae9-aa3d-f156b9d64782
STEP: Delete "local-pv67bpf" and create a new PV for same local volume storage
Mar 25 12:03:09.370: INFO: Deleting PersistentVolumeClaim "pvc-dhgrh"
Mar 25 12:03:12.129: INFO: Deleting PersistentVolumeClaim "pvc-w95gg"
STEP: Delete "local-pvz2twm" and create a new PV for same local volume storage
Mar 25 12:03:12.798: INFO: Deleting PersistentVolumeClaim "pvc-nx6r4"
Mar 25 12:03:14.984: INFO: 19/28 pods finished
STEP: Delete "local-pvwxpl7" and create a new PV for same local volume storage
Mar 25 12:03:16.701: INFO: Deleting pod pod-36f7d5c5-1461-4146-9a29-33440b114b85
Mar 25 12:03:18.181: INFO: Deleting PersistentVolumeClaim "pvc-5vljj"
Mar 25 12:03:19.957: INFO: Deleting PersistentVolumeClaim "pvc-kwj82"
STEP: Delete "local-pvdxk7n" and create a new PV for same local volume storage
Mar 25 12:03:20.289: INFO: Deleting PersistentVolumeClaim "pvc-sr57p"
Mar 25 12:03:22.076: INFO: 20/28 pods finished
Mar 25 12:03:22.076: INFO: Deleting pod pod-83f9124e-4aef-4626-92a5-5300194a689c
STEP: Delete "local-pvdxk7n" and create a new PV for same local volume storage
STEP: Delete "local-pv5hhkl" and create a new PV for same local volume storage
Mar 25 12:03:23.381: INFO: Deleting PersistentVolumeClaim "pvc-tsljr"
Mar 25 12:03:24.476: INFO: Deleting PersistentVolumeClaim "pvc-n6fb5"
STEP: Delete "local-pv2vchm" and create a new PV for same local volume storage
Mar 25 12:03:25.327: INFO: Deleting PersistentVolumeClaim "pvc-ngqf8"
Mar 25 12:03:25.948: INFO: 21/28 pods finished
STEP: Delete "local-pvttspp" and create a new PV for same local volume storage
Mar 25 12:03:27.466: INFO: Deleting pod pod-cd604e1d-d02a-49c1-a42b-ae4ef084fde7
STEP: Delete "local-pvc2w89" and create a new PV for same local volume storage
Mar 25 12:03:29.692: INFO: Deleting PersistentVolumeClaim "pvc-95c55"
Mar 25 12:03:33.589: INFO: Deleting PersistentVolumeClaim "pvc-pm8mt"
STEP: Delete "local-pvvcg95" and create a new PV for same local volume storage
Mar 25 12:03:34.039: INFO: Deleting PersistentVolumeClaim "pvc-ljpp2"
Mar 25 12:03:34.185: INFO: 22/28 pods finished
STEP: Delete "local-pvmlzzq" and create a new PV for same local volume storage
STEP: Delete "local-pvhfss5" and create a new PV for same local volume storage
STEP: Delete "local-pvs8r27" and create a new PV for same local volume storage
Mar 25 12:03:34.938: INFO: Deleting pod pod-f96b793c-535b-40c9-aa14-67d1960fda23
Mar 25 12:03:35.104: INFO: Deleting PersistentVolumeClaim "pvc-lxd2j"
Mar 25 12:03:35.229: INFO: Deleting PersistentVolumeClaim "pvc-prxt7"
STEP: Delete "local-pvwnx82" and create a new PV for same local volume storage
Mar 25 12:03:35.311: INFO: Deleting PersistentVolumeClaim "pvc-5c62q"
Mar 25 12:03:35.385: INFO: 23/28 pods finished
STEP: Delete "local-pvm9nb8" and create a new PV for same local volume storage
STEP: Delete "local-pvszj29" and create a new PV for same local volume storage
STEP: Delete "local-pvm46gt" and create a new PV for same local volume storage
STEP: Delete "local-pvwxgpt" and create a new PV for same local volume storage
Mar 25 12:03:48.890: INFO: Deleting pod pod-607c1ad9-4478-449d-b904-3f8a0a4fa3a9
Mar 25 12:03:53.241: INFO: Deleting PersistentVolumeClaim "pvc-fwxsf"
Mar 25 12:03:53.909: INFO: Deleting PersistentVolumeClaim "pvc-cqchf"
STEP: Delete "local-pvqx548" and create a new PV for same local volume storage
Mar 25 12:03:54.710: INFO: Deleting PersistentVolumeClaim "pvc-wwkgc"
Mar 25 12:03:56.069: INFO: 24/28 pods finished
STEP: Delete "local-pvqx548" and create a new PV for same local volume storage
Mar 25 12:03:57.492: INFO: Deleting pod pod-56a879c2-06c7-487b-a940-c637e71bb27e
STEP: Delete "local-pvv8q2r" and create a new PV for same local volume storage
Mar 25 12:03:58.287: INFO: Deleting PersistentVolumeClaim "pvc-mzrqb"
STEP: Delete "local-pvv8q2r" and create a new PV for same local volume storage
Mar 25 12:03:59.141: INFO: Deleting PersistentVolumeClaim "pvc-2tqhv"
STEP: Delete "local-pv96h44" and create a new PV for same local volume storage
Mar 25 12:03:59.637: INFO: Deleting PersistentVolumeClaim "pvc-rfvhx"
Mar 25 12:03:59.794: INFO: 25/28 pods finished
STEP: Delete "local-pv96h44" and create a new PV for same local volume storage
STEP: Delete "local-pv85thl" and create a new PV for same local volume storage
STEP: Delete "local-pv474hv" and create a new PV for same local volume storage
STEP: Delete "local-pvmdpdz" and create a new PV for same local volume storage
Mar 25 12:04:07.166: INFO: Deleting pod pod-a59ce561-ed32-41a2-a622-5891207c8909
STEP: Delete "local-pvdbdmd" and create a new PV for same local volume storage
Mar 25 12:04:09.619: INFO: Deleting PersistentVolumeClaim "pvc-rdwhz"
Mar 25 12:04:10.971: INFO: Deleting PersistentVolumeClaim "pvc-9ssfg"
STEP: Delete "local-pvdbdmd" and create a new PV for same local volume storage
Mar 25 12:04:11.616: INFO: Deleting PersistentVolumeClaim "pvc-vtmtc"
Mar 25 12:04:12.204: INFO: 26/28 pods finished
Mar 25 12:04:13.107: INFO: Deleting pod pod-0c3b7b7e-5c25-4cda-8903-c8c79bd65d12
STEP: Delete "local-pv25tsb" and create a new PV for same local volume storage
Mar 25 12:04:14.551: INFO: Deleting PersistentVolumeClaim "pvc-5jq5k"
Mar 25 12:04:15.139: INFO: Deleting PersistentVolumeClaim "pvc-szfgf"
STEP: Delete "local-pv25tsb" and create a new PV for same local volume storage
Mar 25 12:04:15.342: INFO: Deleting PersistentVolumeClaim "pvc-h2dmz"
STEP: Delete "local-pv5f477" and create a new PV for same local volume storage
Mar 25 12:04:15.442: INFO: 27/28 pods finished
Mar 25 12:04:15.442: INFO: Deleting pod pod-ba3fc45f-a74a-437c-93fe-4f28330371a1
STEP: Delete "local-pv5f477" and create a new PV for same local volume storage
Mar 25 12:04:16.843: INFO: Deleting PersistentVolumeClaim "pvc-6ksgs"
STEP: Delete "local-pv9pk88" and create a new PV for same local volume storage
Mar 25 12:04:17.035: INFO: Deleting PersistentVolumeClaim "pvc-x6tj9"
Mar 25 12:04:17.123: INFO: Deleting PersistentVolumeClaim "pvc-8k862"
STEP: Delete "local-pvxlvwn" and create a new PV for same local volume storage
Mar 25 12:04:17.269: INFO: 28/28 pods finished
[AfterEach] Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519
STEP: Stop and wait for recycle goroutine to finish
STEP: Clean all PVs
STEP: Cleaning up 10 local volumes on node "latest-worker"
STEP: Cleaning up PVC and PV
Mar 25 12:04:17.304: INFO: pvc is nil
Mar 25 12:04:17.304: INFO: Deleting PersistentVolume "local-pvpf4hr"
STEP: Cleaning up PVC and PV
Mar 25 12:04:17.441: INFO: pvc is nil
Mar 25 12:04:17.441: INFO: Deleting PersistentVolume "local-pvdh6pr"
STEP: Cleaning up PVC and PV
Mar 25 12:04:17.585: INFO: pvc is nil
Mar 25 12:04:17.585: INFO: Deleting PersistentVolume "local-pvrqn8n"
STEP: Cleaning up PVC and PV
Mar 25 12:04:17.635: INFO: pvc is nil
Mar 25 12:04:17.635: INFO: Deleting PersistentVolume "local-pvbr2vw"
STEP: Cleaning up PVC and PV
Mar 25 12:04:17.800: INFO: pvc is nil
Mar 25 12:04:17.800: INFO: Deleting PersistentVolume "local-pvl56pg"
STEP: Cleaning up PVC and PV
Mar 25 12:04:18.763: INFO: pvc is nil
Mar 25 12:04:18.763: INFO: Deleting PersistentVolume "local-pvn2ffw"
STEP: Cleaning up PVC and PV
Mar 25 12:04:19.242: INFO: pvc is nil
Mar 25 12:04:19.242: INFO: Deleting PersistentVolume "local-pvskpvs"
STEP: Cleaning up PVC and PV
Mar 25 12:04:19.392: INFO: pvc is nil
Mar 25 12:04:19.392: INFO: Deleting PersistentVolume "local-pvrxw6w"
STEP: Cleaning up PVC and PV
Mar 25 12:04:19.541: INFO: pvc is nil
Mar 25 12:04:19.542: INFO: Deleting PersistentVolume "local-pvwtm7w"
STEP: Cleaning up PVC and PV
Mar 25 12:04:19.764: INFO: pvc is nil
Mar 25 12:04:19.764: INFO: Deleting PersistentVolume "local-pvdcpvh"
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-e7bfb771-5f84-478f-b327-8aa0d7f85d76"
Mar 25 12:04:20.093: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e7bfb771-5f84-478f-b327-8aa0d7f85d76"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:20.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:21.217: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e7bfb771-5f84-478f-b327-8aa0d7f85d76] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:21.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-8ce216f6-2025-48d5-af7e-ef5903e6c336"
Mar 25 12:04:21.894: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8ce216f6-2025-48d5-af7e-ef5903e6c336"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:21.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:22.371: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8ce216f6-2025-48d5-af7e-ef5903e6c336] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:22.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-5af4e45d-3a5a-4c75-9d38-c30bbfe8e96f"
Mar 25 12:04:22.538: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5af4e45d-3a5a-4c75-9d38-c30bbfe8e96f"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:22.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:22.811: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5af4e45d-3a5a-4c75-9d38-c30bbfe8e96f] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:22.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-10337142-fd07-4a55-90db-473aafb9c62e"
Mar 25 12:04:22.912: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-10337142-fd07-4a55-90db-473aafb9c62e"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:22.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:23.145: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-10337142-fd07-4a55-90db-473aafb9c62e] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:23.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-0d937990-de20-481b-8048-2cac0a2425ad"
Mar 25 12:04:23.254: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0d937990-de20-481b-8048-2cac0a2425ad"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:23.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:23.480: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0d937990-de20-481b-8048-2cac0a2425ad] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:23.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-e79e9be6-ad3f-45dd-8fdc-c4e9bedbf74b"
Mar 25 12:04:23.793: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e79e9be6-ad3f-45dd-8fdc-c4e9bedbf74b"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:23.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:24.060: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e79e9be6-ad3f-45dd-8fdc-c4e9bedbf74b] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:24.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-4eca7bf2-e897-4322-b780-290858bb4233"
Mar 25 12:04:24.379: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4eca7bf2-e897-4322-b780-290858bb4233"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:24.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:24.671: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4eca7bf2-e897-4322-b780-290858bb4233] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:24.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-a8ac093d-12e8-4abc-b4d7-53d5b60af75a"
Mar 25 12:04:26.913: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a8ac093d-12e8-4abc-b4d7-53d5b60af75a"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:26.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:27.453: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a8ac093d-12e8-4abc-b4d7-53d5b60af75a] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:27.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-69d5670c-09cc-4c8a-a56d-d323a23c7277"
Mar 25 12:04:28.704: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-69d5670c-09cc-4c8a-a56d-d323a23c7277"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:28.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:29.377: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-69d5670c-09cc-4c8a-a56d-d323a23c7277] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:29.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-36506673-b4a7-4ce7-947d-38e45403c646"
Mar 25 12:04:29.616: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-36506673-b4a7-4ce7-947d-38e45403c646"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:29.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:29.806: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-36506673-b4a7-4ce7-947d-38e45403c646] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker-d9db5 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:29.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Cleaning up 10 local volumes on node "latest-worker2"
STEP: Cleaning up PVC and PV
Mar 25 12:04:29.934: INFO: pvc is nil
Mar 25 12:04:29.934: INFO: Deleting PersistentVolume "local-pvgljxp"
STEP: Cleaning up PVC and PV
Mar 25 12:04:30.004: INFO: pvc is nil
Mar 25 12:04:30.004: INFO: Deleting PersistentVolume "local-pvxl778"
STEP: Cleaning up PVC and PV
Mar 25 12:04:30.078: INFO: pvc is nil
Mar 25 12:04:30.078: INFO: Deleting PersistentVolume "local-pv2fnrz"
STEP: Cleaning up PVC and PV
Mar 25 12:04:30.180: INFO: pvc is nil
Mar 25 12:04:30.180: INFO: Deleting PersistentVolume "local-pvb92qx"
STEP: Cleaning up PVC and PV
Mar 25 12:04:30.586: INFO: pvc is nil
Mar 25 12:04:30.586: INFO: Deleting PersistentVolume "local-pvh4g85"
STEP: Cleaning up PVC and PV
Mar 25 12:04:30.698: INFO: pvc is nil
Mar 25 12:04:30.698: INFO: Deleting PersistentVolume "local-pvt7fhg"
STEP: Cleaning up PVC and PV
Mar 25 12:04:30.761: INFO: pvc is nil
Mar 25 12:04:30.761: INFO: Deleting PersistentVolume "local-pvvd8rl"
STEP: Cleaning up PVC and PV
Mar 25 12:04:30.865: INFO: pvc is nil
Mar 25 12:04:30.865: INFO: Deleting PersistentVolume "local-pv8nxcl"
STEP: Cleaning up PVC and PV
Mar 25 12:04:30.937: INFO: pvc is nil
Mar 25 12:04:30.937: INFO: Deleting PersistentVolume "local-pvr5kx7"
STEP: Cleaning up PVC and PV
Mar 25 12:04:31.103: INFO: pvc is nil
Mar 25 12:04:31.103: INFO: Deleting PersistentVolume "local-pvz5hsl"
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-634b76df-20a2-4b47-b5a7-4adcf11d3184"
Mar 25 12:04:31.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-634b76df-20a2-4b47-b5a7-4adcf11d3184"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:31.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:32.851: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-634b76df-20a2-4b47-b5a7-4adcf11d3184] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:32.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-ad84026f-38ea-4163-b376-57f58919f4ef"
Mar 25 12:04:33.004: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ad84026f-38ea-4163-b376-57f58919f4ef"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:33.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:33.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad84026f-38ea-4163-b376-57f58919f4ef] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:33.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-5826cb9d-9521-4b59-a8d6-6018ec84bae4"
Mar 25 12:04:34.042: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5826cb9d-9521-4b59-a8d6-6018ec84bae4"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:34.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:35.007: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5826cb9d-9521-4b59-a8d6-6018ec84bae4] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:35.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-9a5bdc86-d51e-483e-9b78-00f14110bef5"
Mar 25 12:04:35.153: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9a5bdc86-d51e-483e-9b78-00f14110bef5"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:35.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:35.481: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9a5bdc86-d51e-483e-9b78-00f14110bef5] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:35.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-7f929152-7dc4-4988-b457-8f8dd16d3d18"
Mar 25 12:04:35.837: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7f929152-7dc4-4988-b457-8f8dd16d3d18"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:35.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:36.952: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7f929152-7dc4-4988-b457-8f8dd16d3d18] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:36.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-dfa655bd-eaa1-46a6-9151-cc66aba01d9b"
Mar 25 12:04:37.255: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-dfa655bd-eaa1-46a6-9151-cc66aba01d9b"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:37.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:38.737: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dfa655bd-eaa1-46a6-9151-cc66aba01d9b] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:38.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-e6fa280c-4bad-4ff2-b98e-d6eb40b1d7c0"
Mar 25 12:04:39.033: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e6fa280c-4bad-4ff2-b98e-d6eb40b1d7c0"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:39.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:39.257: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e6fa280c-4bad-4ff2-b98e-d6eb40b1d7c0] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:39.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-7c9e7d3a-8626-4d0e-9e6a-a51805a515e5"
Mar 25 12:04:39.446: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7c9e7d3a-8626-4d0e-9e6a-a51805a515e5"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:39.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:39.799: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7c9e7d3a-8626-4d0e-9e6a-a51805a515e5] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:39.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-158658eb-837a-4b60-8441-b89b0dbf2f6c"
Mar 25 12:04:39.973: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-158658eb-837a-4b60-8441-b89b0dbf2f6c"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:39.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:40.191: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-158658eb-837a-4b60-8441-b89b0dbf2f6c] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:40.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-cab29ba9-1f51-46ee-864c-8087036ffe11"
Mar 25 12:04:40.324: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cab29ba9-1f51-46ee-864c-8087036ffe11"] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:40.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
Mar 25 12:04:40.464: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cab29ba9-1f51-46ee-864c-8087036ffe11] Namespace:persistent-local-volumes-test-2503 PodName:hostexec-latest-worker2-h786m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:04:40.464: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:04:41.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2503" for this suite. • [SLOW TEST:263.395 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":133,"completed":15,"skipped":794,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} S ------------------------------ [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:04:42.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-6615 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 12:04:43.064: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-attacher Mar 25 12:04:43.124: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6615 Mar 25 12:04:43.124: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6615 Mar 25 12:04:43.143: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6615 Mar 25 12:04:43.201: INFO: creating *v1.Role: csi-mock-volumes-6615-3366/external-attacher-cfg-csi-mock-volumes-6615 Mar 25 12:04:43.302: INFO: creating *v1.RoleBinding: csi-mock-volumes-6615-3366/csi-attacher-role-cfg Mar 25 12:04:43.387: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-provisioner Mar 25 12:04:43.482: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6615 Mar 25 12:04:43.482: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6615 Mar 25 12:04:43.513: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6615 Mar 25 12:04:43.568: INFO: creating *v1.Role: csi-mock-volumes-6615-3366/external-provisioner-cfg-csi-mock-volumes-6615 Mar 25 12:04:43.655: INFO: creating *v1.RoleBinding: csi-mock-volumes-6615-3366/csi-provisioner-role-cfg Mar 25 12:04:43.708: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-resizer Mar 25 12:04:43.730: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6615 Mar 25 12:04:43.730: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6615 Mar 25 12:04:43.809: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6615 Mar 25 12:04:43.840: INFO: creating *v1.Role: 
csi-mock-volumes-6615-3366/external-resizer-cfg-csi-mock-volumes-6615 Mar 25 12:04:43.864: INFO: creating *v1.RoleBinding: csi-mock-volumes-6615-3366/csi-resizer-role-cfg Mar 25 12:04:43.882: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-snapshotter Mar 25 12:04:43.949: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6615 Mar 25 12:04:43.949: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6615 Mar 25 12:04:43.996: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6615 Mar 25 12:04:44.014: INFO: creating *v1.Role: csi-mock-volumes-6615-3366/external-snapshotter-leaderelection-csi-mock-volumes-6615 Mar 25 12:04:44.034: INFO: creating *v1.RoleBinding: csi-mock-volumes-6615-3366/external-snapshotter-leaderelection Mar 25 12:04:44.104: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-mock Mar 25 12:04:44.135: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6615 Mar 25 12:04:44.147: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6615 Mar 25 12:04:44.185: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6615 Mar 25 12:04:44.280: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6615 Mar 25 12:04:44.298: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6615 Mar 25 12:04:44.465: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6615 Mar 25 12:04:44.500: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6615 Mar 25 12:04:44.535: INFO: creating *v1.StatefulSet: csi-mock-volumes-6615-3366/csi-mockplugin Mar 25 12:04:44.613: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6615 Mar 25 12:04:44.631: INFO: creating *v1.StatefulSet: csi-mock-volumes-6615-3366/csi-mockplugin-resizer Mar 25 12:04:44.655: 
INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6615" Mar 25 12:04:44.781: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6615 to register on node latest-worker STEP: Creating pod Mar 25 12:05:15.010: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 12:05:16.183: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-m6c5j] to have phase Bound Mar 25 12:05:17.057: INFO: PersistentVolumeClaim pvc-m6c5j found but phase is Pending instead of Bound. Mar 25 12:05:19.244: INFO: PersistentVolumeClaim pvc-m6c5j found and phase=Bound (3.060578073s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-p57hw Mar 25 12:06:38.939: INFO: Deleting pod "pvc-volume-tester-p57hw" in namespace "csi-mock-volumes-6615" Mar 25 12:06:39.149: INFO: Wait up to 5m0s for pod "pvc-volume-tester-p57hw" to be fully deleted STEP: Deleting claim pvc-m6c5j Mar 25 12:07:48.388: INFO: Waiting up to 2m0s for PersistentVolume pvc-9221b5f6-2b2c-4ecc-bc2b-14e9bdf5c175 to get deleted Mar 25 12:07:48.566: INFO: PersistentVolume pvc-9221b5f6-2b2c-4ecc-bc2b-14e9bdf5c175 found and phase=Bound (178.753757ms) Mar 25 12:07:50.788: INFO: PersistentVolume pvc-9221b5f6-2b2c-4ecc-bc2b-14e9bdf5c175 was removed STEP: Deleting storageclass csi-mock-volumes-6615-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6615 STEP: Waiting for namespaces [csi-mock-volumes-6615] to vanish STEP: uninstalling csi mock driver Mar 25 12:08:09.570: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-attacher Mar 25 12:08:09.658: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6615 Mar 25 12:08:09.807: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6615 Mar 25 12:08:09.997: INFO: deleting *v1.Role: 
csi-mock-volumes-6615-3366/external-attacher-cfg-csi-mock-volumes-6615 Mar 25 12:08:11.243: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6615-3366/csi-attacher-role-cfg Mar 25 12:08:12.196: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-provisioner Mar 25 12:08:12.535: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6615 Mar 25 12:08:12.986: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6615 Mar 25 12:08:13.316: INFO: deleting *v1.Role: csi-mock-volumes-6615-3366/external-provisioner-cfg-csi-mock-volumes-6615 Mar 25 12:08:14.261: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6615-3366/csi-provisioner-role-cfg Mar 25 12:08:14.459: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-resizer Mar 25 12:08:14.492: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6615 Mar 25 12:08:14.610: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6615 Mar 25 12:08:14.685: INFO: deleting *v1.Role: csi-mock-volumes-6615-3366/external-resizer-cfg-csi-mock-volumes-6615 Mar 25 12:08:14.793: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6615-3366/csi-resizer-role-cfg Mar 25 12:08:14.921: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-snapshotter Mar 25 12:08:14.950: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6615 Mar 25 12:08:15.064: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6615 Mar 25 12:08:15.161: INFO: deleting *v1.Role: csi-mock-volumes-6615-3366/external-snapshotter-leaderelection-csi-mock-volumes-6615 Mar 25 12:08:15.271: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6615-3366/external-snapshotter-leaderelection Mar 25 12:08:15.387: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6615-3366/csi-mock Mar 25 12:08:15.449: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6615 Mar 25 12:08:15.605: INFO: 
deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6615 Mar 25 12:08:15.826: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6615 Mar 25 12:08:16.011: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6615 Mar 25 12:08:16.089: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6615 Mar 25 12:08:16.215: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6615 Mar 25 12:08:16.361: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6615 Mar 25 12:08:16.483: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6615-3366/csi-mockplugin Mar 25 12:08:16.742: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6615 Mar 25 12:08:17.806: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6615-3366/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-6615-3366 STEP: Waiting for namespaces [csi-mock-volumes-6615-3366] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:09:01.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:259.263 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, 
nodeExpansion=on","total":133,"completed":16,"skipped":795,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:09:01.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-6ef41f93-a61e-4f20-ad04-508617a81c8b" Mar 25 12:09:10.275: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6ef41f93-a61e-4f20-ad04-508617a81c8b && dd if=/dev/zero of=/tmp/local-volume-test-6ef41f93-a61e-4f20-ad04-508617a81c8b/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6ef41f93-a61e-4f20-ad04-508617a81c8b/file] Namespace:persistent-local-volumes-test-2930 PodName:hostexec-latest-worker2-rd8x5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:09:10.275: INFO: >>> 
kubeConfig: /root/.kube/config Mar 25 12:09:10.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6ef41f93-a61e-4f20-ad04-508617a81c8b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2930 PodName:hostexec-latest-worker2-rd8x5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:09:10.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:09:11.213: INFO: Creating a PV followed by a PVC Mar 25 12:09:11.353: INFO: Waiting for PV local-pvfz5sr to bind to PVC pvc-8qkr9 Mar 25 12:09:11.353: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8qkr9] to have phase Bound Mar 25 12:09:11.541: INFO: PersistentVolumeClaim pvc-8qkr9 found but phase is Pending instead of Bound. Mar 25 12:09:13.860: INFO: PersistentVolumeClaim pvc-8qkr9 found and phase=Bound (2.506993359s) Mar 25 12:09:13.860: INFO: Waiting up to 3m0s for PersistentVolume local-pvfz5sr to have phase Bound Mar 25 12:09:13.899: INFO: PersistentVolume local-pvfz5sr found and phase=Bound (38.638954ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 12:09:27.197: INFO: pod "pod-56aa130d-7dc4-495f-a37b-c92f8df86b7d" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 12:09:27.197: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2930 PodName:pod-56aa130d-7dc4-495f-a37b-c92f8df86b7d ContainerName:write-pod Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:09:27.197: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:09:27.965: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000041 seconds, 428.7KB/s", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 12:09:27.965: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-2930 PodName:pod-56aa130d-7dc4-495f-a37b-c92f8df86b7d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:09:27.965: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:09:28.586: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod1 Mar 25 12:09:28.586: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2930 PodName:pod-56aa130d-7dc4-495f-a37b-c92f8df86b7d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:09:28.586: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:09:28.773: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > 
/tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000033 seconds, 325.5KB/s", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-56aa130d-7dc4-495f-a37b-c92f8df86b7d in namespace persistent-local-volumes-test-2930 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:09:28.881: INFO: Deleting PersistentVolumeClaim "pvc-8qkr9" Mar 25 12:09:29.059: INFO: Deleting PersistentVolume "local-pvfz5sr" Mar 25 12:09:30.882: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6ef41f93-a61e-4f20-ad04-508617a81c8b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2930 PodName:hostexec-latest-worker2-rd8x5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:09:30.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-6ef41f93-a61e-4f20-ad04-508617a81c8b/file Mar 25 12:09:31.987: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-2930 PodName:hostexec-latest-worker2-rd8x5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:09:31.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory 
/tmp/local-volume-test-6ef41f93-a61e-4f20-ad04-508617a81c8b Mar 25 12:09:32.609: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6ef41f93-a61e-4f20-ad04-508617a81c8b] Namespace:persistent-local-volumes-test-2930 PodName:hostexec-latest-worker2-rd8x5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:09:32.609: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:09:34.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2930" for this suite. • [SLOW TEST:35.580 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":17,"skipped":813,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should support r/w [NodeConformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:09:37.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 STEP: Creating a pod to test hostPath r/w Mar 25 12:09:42.568: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9400" to be "Succeeded or Failed" Mar 25 12:09:43.315: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 747.069313ms Mar 25 12:09:45.933: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.364960731s Mar 25 12:09:48.946: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377855913s Mar 25 12:09:51.191: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.623173178s Mar 25 12:09:53.226: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.658341256s Mar 25 12:09:56.042: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.474099377s Mar 25 12:09:58.077: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.50943927s Mar 25 12:10:00.593: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.024672013s Mar 25 12:10:02.938: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.370480549s Mar 25 12:10:05.184: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.615780633s Mar 25 12:10:07.709: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.140700017s Mar 25 12:10:09.717: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 27.148567894s Mar 25 12:10:13.102: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 30.533881617s Mar 25 12:10:15.118: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 32.549710249s Mar 25 12:10:17.170: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 34.601530916s Mar 25 12:10:19.353: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 36.785436704s Mar 25 12:10:21.405: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 38.836640224s Mar 25 12:10:23.531: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 40.962921491s Mar 25 12:10:26.315: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 43.746916437s Mar 25 12:10:29.264: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 46.696450821s Mar 25 12:10:31.310: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 48.742195764s Mar 25 12:10:33.623: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 51.054723348s Mar 25 12:10:35.735: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 53.167119955s Mar 25 12:10:37.971: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 55.402751805s Mar 25 12:10:40.231: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 57.663338409s Mar 25 12:10:42.790: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.221798188s Mar 25 12:10:44.909: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.340864563s Mar 25 12:10:47.726: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.157920629s Mar 25 12:10:49.768: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.200294822s Mar 25 12:10:52.042: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.473680914s Mar 25 12:10:55.522: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.954417283s Mar 25 12:10:58.077: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.509028596s Mar 25 12:11:01.963: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.394708535s Mar 25 12:11:04.621: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.052771679s Mar 25 12:11:06.814: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.245744871s Mar 25 12:11:09.161: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.593262242s Mar 25 12:11:11.232: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m28.664054331s STEP: Saw pod success Mar 25 12:11:11.232: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 25 12:11:11.239: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-2: STEP: delete the pod Mar 25 12:11:13.488: INFO: Waiting for pod pod-host-path-test to disappear Mar 25 12:11:13.795: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:11:13.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9400" for this suite. • [SLOW TEST:97.997 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":133,"completed":18,"skipped":874,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:11:15.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 12:11:26.449: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c970d4f8-3f8f-43ee-8901-099942a9cd7b] Namespace:persistent-local-volumes-test-5413 PodName:hostexec-latest-worker2-cbtx4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:11:26.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:11:26.841: INFO: Creating a PV followed by a PVC Mar 25 12:11:26.994: INFO: Waiting for PV local-pvplgxg to bind to PVC pvc-wlq4j Mar 25 12:11:26.994: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-wlq4j] to have phase Bound Mar 25 12:11:27.628: INFO: PersistentVolumeClaim pvc-wlq4j found but phase is Pending instead of Bound. Mar 25 12:11:30.518: INFO: PersistentVolumeClaim pvc-wlq4j found but phase is Pending instead of Bound. 
Mar 25 12:11:32.523: INFO: PersistentVolumeClaim pvc-wlq4j found and phase=Bound (5.528215157s) Mar 25 12:11:32.523: INFO: Waiting up to 3m0s for PersistentVolume local-pvplgxg to have phase Bound Mar 25 12:11:32.648: INFO: PersistentVolume local-pvplgxg found and phase=Bound (125.456121ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 12:11:43.880: INFO: pod "pod-15814597-c6bb-4549-b805-241756cb1a31" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 12:11:43.880: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5413 PodName:pod-15814597-c6bb-4549-b805-241756cb1a31 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:43.880: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:44.341: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 12:11:44.341: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5413 PodName:pod-15814597-c6bb-4549-b805-241756cb1a31 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:44.342: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:44.767: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 12:11:55.358: INFO: pod "pod-ca1d1ef9-cc39-44d8-8658-f9a6ed01d248" created on Node "latest-worker2" Mar 25 12:11:55.358: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5413 
PodName:pod-ca1d1ef9-cc39-44d8-8658-f9a6ed01d248 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:55.358: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:55.582: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 12:11:55.582: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c970d4f8-3f8f-43ee-8901-099942a9cd7b > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5413 PodName:pod-ca1d1ef9-cc39-44d8-8658-f9a6ed01d248 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:55.582: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:55.989: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c970d4f8-3f8f-43ee-8901-099942a9cd7b > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 12:11:55.989: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5413 PodName:pod-15814597-c6bb-4549-b805-241756cb1a31 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:55.989: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:56.156: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-c970d4f8-3f8f-43ee-8901-099942a9cd7b", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-15814597-c6bb-4549-b805-241756cb1a31 in namespace persistent-local-volumes-test-5413 STEP: Deleting pod2 STEP: Deleting pod pod-ca1d1ef9-cc39-44d8-8658-f9a6ed01d248 in namespace persistent-local-volumes-test-5413 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:11:56.519: INFO: 
Deleting PersistentVolumeClaim "pvc-wlq4j" Mar 25 12:11:57.340: INFO: Deleting PersistentVolume "local-pvplgxg" STEP: Removing the test directory Mar 25 12:11:58.539: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c970d4f8-3f8f-43ee-8901-099942a9cd7b] Namespace:persistent-local-volumes-test-5413 PodName:hostexec-latest-worker2-cbtx4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:11:58.539: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:12:00.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5413" for this suite. • [SLOW TEST:46.181 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":19,"skipped":918,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSS ------------------------------ 
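The `[Volume type: dir]` test above boils down to a simple write/read round trip through a shared directory: pod1 runs `mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file`, pod2 runs `cat` on the same path. A minimal local sketch of that round trip, using a throwaway temp directory in place of the PV mount (the path is illustrative, not the test's actual `/mnt/volume1`):

```shell
# Stand-in for the local PV backing directory (hypothetical path via mktemp).
VOL=$(mktemp -d)

# What pod1's write-pod executes against the mounted volume:
mkdir -p "$VOL/volume1"
echo test-file-content > "$VOL/volume1/test-file"

# What pod2's read executes; both pods see the same backing directory:
cat "$VOL/volume1/test-file"   # → test-file-content

# Cleanup, mirroring the AfterEach "Removing the test directory" step:
rm -r "$VOL"
```

The point the e2e test verifies is that both pods resolve the same node-local directory, so a write from one is immediately visible to the other.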
[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:12:01.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 12:12:14.294: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-a8b07ac5-1da7-4775-8dfa-8a8867f05674-backend && mount --bind /tmp/local-volume-test-a8b07ac5-1da7-4775-8dfa-8a8867f05674-backend /tmp/local-volume-test-a8b07ac5-1da7-4775-8dfa-8a8867f05674-backend && ln -s /tmp/local-volume-test-a8b07ac5-1da7-4775-8dfa-8a8867f05674-backend /tmp/local-volume-test-a8b07ac5-1da7-4775-8dfa-8a8867f05674] Namespace:persistent-local-volumes-test-248 PodName:hostexec-latest-worker-zslgr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:12:14.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:12:14.440: INFO: Creating a PV followed by a PVC Mar 25 12:12:14.516: INFO: Waiting for PV local-pvlw5fk to bind to PVC pvc-ss7nq 
Mar 25 12:12:14.516: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-ss7nq] to have phase Bound Mar 25 12:12:16.250: INFO: PersistentVolumeClaim pvc-ss7nq found but phase is Pending instead of Bound. Mar 25 12:12:18.651: INFO: PersistentVolumeClaim pvc-ss7nq found and phase=Bound (4.134938615s) Mar 25 12:12:18.651: INFO: Waiting up to 3m0s for PersistentVolume local-pvlw5fk to have phase Bound Mar 25 12:12:18.958: INFO: PersistentVolume local-pvlw5fk found and phase=Bound (307.363301ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 12:12:29.940: INFO: pod "pod-7885969d-f6ec-4913-b0fd-9b095aa34edb" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 12:12:29.940: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-248 PodName:pod-7885969d-f6ec-4913-b0fd-9b095aa34edb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:29.940: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:30.284: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 12:12:30.284: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-248 PodName:pod-7885969d-f6ec-4913-b0fd-9b095aa34edb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:30.284: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:30.948: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-7885969d-f6ec-4913-b0fd-9b095aa34edb in namespace persistent-local-volumes-test-248 
STEP: Creating pod2 STEP: Creating a pod Mar 25 12:12:39.814: INFO: pod "pod-cdc32984-f7d0-486c-aacb-025d14ab3de4" created on Node "latest-worker" STEP: Reading in pod2 Mar 25 12:12:39.814: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-248 PodName:pod-cdc32984-f7d0-486c-aacb-025d14ab3de4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:39.814: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:39.974: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-cdc32984-f7d0-486c-aacb-025d14ab3de4 in namespace persistent-local-volumes-test-248 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:12:43.377: INFO: Deleting PersistentVolumeClaim "pvc-ss7nq" Mar 25 12:12:45.138: INFO: Deleting PersistentVolume "local-pvlw5fk" STEP: Removing the test directory Mar 25 12:12:46.255: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-a8b07ac5-1da7-4775-8dfa-8a8867f05674 && umount /tmp/local-volume-test-a8b07ac5-1da7-4775-8dfa-8a8867f05674-backend && rm -r /tmp/local-volume-test-a8b07ac5-1da7-4775-8dfa-8a8867f05674-backend] Namespace:persistent-local-volumes-test-248 PodName:hostexec-latest-worker-zslgr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:12:46.255: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:12:49.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"persistent-local-volumes-test-248" for this suite. • [SLOW TEST:49.033 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":20,"skipped":931,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:12:50.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Mar 25 12:13:07.692: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6459 PodName:hostexec-latest-worker2-vt4jp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:13:07.692: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:13:08.552: INFO: exec latest-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Mar 25 12:13:08.552: INFO: exec latest-worker2: stdout: "0\n" Mar 25 12:13:08.552: INFO: exec latest-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Mar 25 12:13:08.552: INFO: exec latest-worker2: exit code: 0 Mar 25 12:13:08.552: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:13:08.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6459" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [18.410 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:13:08.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-ae61c55d-42f2-4e4b-aaca-68e8655c6b8f" Mar 25 12:13:15.833: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ae61c55d-42f2-4e4b-aaca-68e8655c6b8f && dd if=/dev/zero of=/tmp/local-volume-test-ae61c55d-42f2-4e4b-aaca-68e8655c6b8f/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ae61c55d-42f2-4e4b-aaca-68e8655c6b8f/file] Namespace:persistent-local-volumes-test-59 PodName:hostexec-latest-worker2-6f7m9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:13:15.833: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:13:16.063: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ae61c55d-42f2-4e4b-aaca-68e8655c6b8f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-59 PodName:hostexec-latest-worker2-6f7m9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:13:16.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:13:16.252: INFO: Creating a PV followed by a PVC Mar 25 12:13:16.304: INFO: Waiting for PV local-pv4bzr2 to bind to PVC pvc-spxpg Mar 25 12:13:16.305: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-spxpg] to have phase Bound Mar 25 12:13:16.398: INFO: PersistentVolumeClaim pvc-spxpg found but phase is Pending instead of Bound. 
Mar 25 12:13:18.647: INFO: PersistentVolumeClaim pvc-spxpg found and phase=Bound (2.342570163s) Mar 25 12:13:18.647: INFO: Waiting up to 3m0s for PersistentVolume local-pv4bzr2 to have phase Bound Mar 25 12:13:18.740: INFO: PersistentVolume local-pv4bzr2 found and phase=Bound (93.065435ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 12:13:27.525: INFO: pod "pod-027d10a9-7297-4226-a265-467b78fc92a9" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 12:13:27.525: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-59 PodName:pod-027d10a9-7297-4226-a265-467b78fc92a9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:13:27.525: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:13:27.732: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000053 seconds, 331.7KB/s", err: Mar 25 12:13:27.732: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-59 PodName:pod-027d10a9-7297-4226-a265-467b78fc92a9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:13:27.732: INFO: >>> kubeConfig: /root/.kube/config Mar 25 
12:13:27.847: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-027d10a9-7297-4226-a265-467b78fc92a9 in namespace persistent-local-volumes-test-59 STEP: Creating pod2 STEP: Creating a pod Mar 25 12:13:36.139: INFO: pod "pod-dac038c9-c543-4f4c-abe0-ac1b9e27f0c7" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 12:13:36.139: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-59 PodName:pod-dac038c9-c543-4f4c-abe0-ac1b9e27f0c7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:13:36.139: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:13:36.339: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-dac038c9-c543-4f4c-abe0-ac1b9e27f0c7 in namespace persistent-local-volumes-test-59 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:13:36.448: INFO: Deleting PersistentVolumeClaim "pvc-spxpg" Mar 25 12:13:36.510: INFO: Deleting PersistentVolume "local-pv4bzr2" Mar 25 12:13:36.650: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ae61c55d-42f2-4e4b-aaca-68e8655c6b8f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-59 PodName:hostexec-latest-worker2-6f7m9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Mar 25 12:13:36.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-ae61c55d-42f2-4e4b-aaca-68e8655c6b8f/file Mar 25 12:13:36.815: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-59 PodName:hostexec-latest-worker2-6f7m9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:13:36.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ae61c55d-42f2-4e4b-aaca-68e8655c6b8f Mar 25 12:13:37.174: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ae61c55d-42f2-4e4b-aaca-68e8655c6b8f] Namespace:persistent-local-volumes-test-59 PodName:hostexec-latest-worker2-6f7m9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:13:37.174: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:13:37.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-59" for this suite. 
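For the `[Volume type: block]` case above, the pods verify the raw device contents with `dd` and `hexdump -n 100 -e '100 "%_p"'`, which prints the first 100 bytes as printable characters (non-printables as dots). A sketch of that verification against a plain zero-filled file standing in for the loop-backed device (creating the real device needs root plus `losetup`, as the log shows):

```shell
# Hypothetical stand-in for /mnt/volume1; the real test targets a loop device.
DEV=$(mktemp)

# Zero-fill 512 bytes, then write the marker at offset 0 without truncating,
# mirroring the test's dd-based raw write:
dd if=/dev/zero of="$DEV" bs=512 count=1 2>/dev/null
printf 'test-file-content' | dd of="$DEV" conv=notrunc 2>/dev/null

# The test's read-back: marker text followed by dots for the zero padding.
hexdump -n 100 -e '100 "%_p"' "$DEV" | head -1

rm -f "$DEV"
```

This is why the logged output is `test-file-content...` rather than the bare string: the block device has no filesystem, so the zero bytes after the marker are dumped too.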
• [SLOW TEST:28.805 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":21,"skipped":1224,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:13:37.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [AfterEach] [sig-storage] EmptyDir volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:13:38.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5493" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":133,"completed":22,"skipped":1299,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:13:39.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Mar 25 12:13:55.608: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8691 PodName:hostexec-latest-worker-hvknq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Mar 25 12:13:55.609: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:13:56.005: INFO: exec latest-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Mar 25 12:13:56.005: INFO: exec latest-worker: stdout: "0\n" Mar 25 12:13:56.005: INFO: exec latest-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Mar 25 12:13:56.005: INFO: exec latest-worker: exit code: 0 Mar 25 12:13:56.005: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:13:56.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8691" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [18.367 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes ConfigMap should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:13:57.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42 [It] should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 STEP: starting configmap-client STEP: Checking that text file contents are perfect. 
Mar 25 12:14:10.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=volume-1070 exec configmap-client --namespace=volume-1070 -- cat /opt/0/firstfile'
Mar 25 12:14:25.184: INFO: stderr: ""
Mar 25 12:14:25.184: INFO: stdout: "this is the first file"
Mar 25 12:14:25.184: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-1070 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:14:25.184: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:14:25.477: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-1070 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:14:25.477: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:14:25.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=volume-1070 exec configmap-client --namespace=volume-1070 -- cat /opt/1/secondfile'
Mar 25 12:14:26.402: INFO: stderr: ""
Mar 25 12:14:26.402: INFO: stdout: "this is the second file"
Mar 25 12:14:26.402: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/1] Namespace:volume-1070 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:14:26.402: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:14:26.678: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/1] Namespace:volume-1070 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:14:26.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Deleting pod configmap-client in namespace volume-1070
Mar 25 12:14:27.038: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:27.803: INFO: Pod configmap-client still exists
Mar 25 12:14:29.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:30.121: INFO: Pod configmap-client still exists
Mar 25 12:14:31.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:32.211: INFO: Pod configmap-client still exists
Mar 25 12:14:33.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:33.977: INFO: Pod configmap-client still exists
Mar 25 12:14:35.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:35.876: INFO: Pod configmap-client still exists
Mar 25 12:14:37.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:39.059: INFO: Pod configmap-client still exists
Mar 25 12:14:39.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:40.145: INFO: Pod configmap-client still exists
Mar 25 12:14:41.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:42.058: INFO: Pod configmap-client still exists
Mar 25 12:14:43.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:44.310: INFO: Pod configmap-client still exists
Mar 25 12:14:45.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:46.404: INFO: Pod configmap-client still exists
Mar 25 12:14:47.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:48.166: INFO: Pod configmap-client still exists
Mar 25 12:14:49.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:49.975: INFO: Pod configmap-client still exists
Mar 25 12:14:51.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:51.922: INFO: Pod configmap-client still exists
Mar 25 12:14:53.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:53.998: INFO: Pod configmap-client still exists
Mar 25 12:14:55.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:55.894: INFO: Pod configmap-client still exists
Mar 25 12:14:57.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:14:58.654: INFO: Pod configmap-client still exists
Mar 25 12:14:59.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:15:00.365: INFO: Pod configmap-client still exists
Mar 25 12:15:01.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:15:03.153: INFO: Pod configmap-client still exists
Mar 25 12:15:03.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:15:04.457: INFO: Pod configmap-client still exists
Mar 25 12:15:05.804: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:15:06.818: INFO: Pod configmap-client still exists
Mar 25 12:15:07.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:15:08.231: INFO: Pod configmap-client still exists
Mar 25 12:15:09.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:15:10.453: INFO: Pod configmap-client still exists
Mar 25 12:15:11.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:15:12.167: INFO: Pod configmap-client still exists
Mar 25 12:15:13.803: INFO: Waiting for pod configmap-client to disappear
Mar 25 12:15:15.745: INFO: Pod configmap-client no longer exists
[AfterEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:15:19.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1070" for this suite.
• [SLOW TEST:83.722 seconds]
[sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47
should be mountable
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":133,"completed":23,"skipped":1396,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:15:21.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 25 12:15:43.685: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-9bb92f99-d08a-4d02-aa0d-cebd372eeba4 && mount --bind
/tmp/local-volume-test-9bb92f99-d08a-4d02-aa0d-cebd372eeba4 /tmp/local-volume-test-9bb92f99-d08a-4d02-aa0d-cebd372eeba4] Namespace:persistent-local-volumes-test-1178 PodName:hostexec-latest-worker-tclpd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:15:43.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:15:43.876: INFO: Creating a PV followed by a PVC Mar 25 12:15:46.037: INFO: Waiting for PV local-pv9p4zj to bind to PVC pvc-hp9xf Mar 25 12:15:46.037: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-hp9xf] to have phase Bound Mar 25 12:15:48.187: INFO: PersistentVolumeClaim pvc-hp9xf found but phase is Pending instead of Bound. Mar 25 12:15:50.465: INFO: PersistentVolumeClaim pvc-hp9xf found but phase is Pending instead of Bound. Mar 25 12:15:53.446: INFO: PersistentVolumeClaim pvc-hp9xf found and phase=Bound (7.408800458s) Mar 25 12:15:53.446: INFO: Waiting up to 3m0s for PersistentVolume local-pv9p4zj to have phase Bound Mar 25 12:15:53.921: INFO: PersistentVolume local-pv9p4zj found and phase=Bound (474.794165ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 12:16:15.894: INFO: pod "pod-4fe9604c-260d-4ce6-b494-d7ff819976e5" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 12:16:15.894: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1178 PodName:pod-4fe9604c-260d-4ce6-b494-d7ff819976e5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:16:15.895: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:16:16.190: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > 
/mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 12:16:16.191: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1178 PodName:pod-4fe9604c-260d-4ce6-b494-d7ff819976e5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:16:16.191: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:16:17.038: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 12:16:17.038: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-9bb92f99-d08a-4d02-aa0d-cebd372eeba4 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1178 PodName:pod-4fe9604c-260d-4ce6-b494-d7ff819976e5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:16:17.038: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:16:17.363: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-9bb92f99-d08a-4d02-aa0d-cebd372eeba4 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-4fe9604c-260d-4ce6-b494-d7ff819976e5 in namespace persistent-local-volumes-test-1178 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:16:17.547: INFO: Deleting PersistentVolumeClaim "pvc-hp9xf" Mar 25 12:16:17.687: INFO: Deleting PersistentVolume "local-pv9p4zj" 
STEP: Removing the test directory
Mar 25 12:16:17.821: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-9bb92f99-d08a-4d02-aa0d-cebd372eeba4 && rm -r /tmp/local-volume-test-9bb92f99-d08a-4d02-aa0d-cebd372eeba4] Namespace:persistent-local-volumes-test-1178 PodName:hostexec-latest-worker-tclpd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:16:17.821: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:16:23.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1178" for this suite.
• [SLOW TEST:63.688 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":24,"skipped":1409,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage]
CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:16:24.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-3632 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 12:16:30.387: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-attacher Mar 25 12:16:30.422: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3632 Mar 25 12:16:30.422: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3632 Mar 25 12:16:30.822: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3632 Mar 25 12:16:30.871: INFO: creating *v1.Role: csi-mock-volumes-3632-2843/external-attacher-cfg-csi-mock-volumes-3632 Mar 25 12:16:32.285: INFO: creating *v1.RoleBinding: csi-mock-volumes-3632-2843/csi-attacher-role-cfg Mar 25 12:16:32.509: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-provisioner Mar 25 12:16:33.002: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3632 Mar 25 12:16:33.002: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3632 Mar 25 12:16:33.655: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3632 Mar 25 12:16:33.711: 
INFO: creating *v1.Role: csi-mock-volumes-3632-2843/external-provisioner-cfg-csi-mock-volumes-3632 Mar 25 12:16:33.811: INFO: creating *v1.RoleBinding: csi-mock-volumes-3632-2843/csi-provisioner-role-cfg Mar 25 12:16:33.907: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-resizer Mar 25 12:16:34.267: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3632 Mar 25 12:16:34.267: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3632 Mar 25 12:16:34.581: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3632 Mar 25 12:16:34.597: INFO: creating *v1.Role: csi-mock-volumes-3632-2843/external-resizer-cfg-csi-mock-volumes-3632 Mar 25 12:16:34.669: INFO: creating *v1.RoleBinding: csi-mock-volumes-3632-2843/csi-resizer-role-cfg Mar 25 12:16:35.074: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-snapshotter Mar 25 12:16:35.111: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3632 Mar 25 12:16:35.111: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3632 Mar 25 12:16:35.439: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3632 Mar 25 12:16:35.446: INFO: creating *v1.Role: csi-mock-volumes-3632-2843/external-snapshotter-leaderelection-csi-mock-volumes-3632 Mar 25 12:16:35.511: INFO: creating *v1.RoleBinding: csi-mock-volumes-3632-2843/external-snapshotter-leaderelection Mar 25 12:16:35.612: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-mock Mar 25 12:16:35.667: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3632 Mar 25 12:16:35.703: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3632 Mar 25 12:16:35.816: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3632 Mar 25 12:16:35.848: INFO: creating *v1.ClusterRoleBinding: 
psp-csi-controller-driver-registrar-role-csi-mock-volumes-3632 Mar 25 12:16:36.513: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3632 Mar 25 12:16:36.957: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3632 Mar 25 12:16:37.003: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3632 Mar 25 12:16:37.349: INFO: creating *v1.StatefulSet: csi-mock-volumes-3632-2843/csi-mockplugin Mar 25 12:16:37.394: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3632 Mar 25 12:16:38.103: INFO: creating *v1.StatefulSet: csi-mock-volumes-3632-2843/csi-mockplugin-attacher Mar 25 12:16:38.565: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3632" Mar 25 12:16:39.501: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3632 to register on node latest-worker STEP: Creating pod Mar 25 12:17:10.840: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 12:17:11.718: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-p57ht] to have phase Bound Mar 25 12:17:13.322: INFO: PersistentVolumeClaim pvc-p57ht found but phase is Pending instead of Bound. 
Mar 25 12:17:16.029: INFO: PersistentVolumeClaim pvc-p57ht found and phase=Bound (4.310703667s) STEP: Deleting the previously created pod Mar 25 12:17:43.098: INFO: Deleting pod "pvc-volume-tester-fhhjl" in namespace "csi-mock-volumes-3632" Mar 25 12:17:43.157: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fhhjl" to be fully deleted STEP: Checking CSI driver logs Mar 25 12:18:19.946: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7bce085e-d9ea-4756-b49a-6452160c5281/volumes/kubernetes.io~csi/pvc-63408fb4-19e3-460b-8502-5ee3803107a8/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-fhhjl Mar 25 12:18:19.946: INFO: Deleting pod "pvc-volume-tester-fhhjl" in namespace "csi-mock-volumes-3632" STEP: Deleting claim pvc-p57ht Mar 25 12:18:20.407: INFO: Waiting up to 2m0s for PersistentVolume pvc-63408fb4-19e3-460b-8502-5ee3803107a8 to get deleted Mar 25 12:18:20.592: INFO: PersistentVolume pvc-63408fb4-19e3-460b-8502-5ee3803107a8 found and phase=Bound (185.553244ms) Mar 25 12:18:23.236: INFO: PersistentVolume pvc-63408fb4-19e3-460b-8502-5ee3803107a8 was removed STEP: Deleting storageclass csi-mock-volumes-3632-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3632 STEP: Waiting for namespaces [csi-mock-volumes-3632] to vanish STEP: uninstalling csi mock driver Mar 25 12:19:03.212: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-attacher Mar 25 12:19:03.284: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3632 Mar 25 12:19:03.407: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3632 Mar 25 12:19:03.453: INFO: deleting *v1.Role: csi-mock-volumes-3632-2843/external-attacher-cfg-csi-mock-volumes-3632 Mar 25 12:19:03.558: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-3632-2843/csi-attacher-role-cfg Mar 25 12:19:03.733: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-provisioner Mar 25 12:19:03.829: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3632 Mar 25 12:19:03.924: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3632 Mar 25 12:19:04.046: INFO: deleting *v1.Role: csi-mock-volumes-3632-2843/external-provisioner-cfg-csi-mock-volumes-3632 Mar 25 12:19:04.108: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3632-2843/csi-provisioner-role-cfg Mar 25 12:19:04.222: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-resizer Mar 25 12:19:04.439: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3632 Mar 25 12:19:04.475: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3632 Mar 25 12:19:04.642: INFO: deleting *v1.Role: csi-mock-volumes-3632-2843/external-resizer-cfg-csi-mock-volumes-3632 Mar 25 12:19:04.823: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3632-2843/csi-resizer-role-cfg Mar 25 12:19:05.104: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-snapshotter Mar 25 12:19:05.329: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3632 Mar 25 12:19:06.410: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3632 Mar 25 12:19:07.595: INFO: deleting *v1.Role: csi-mock-volumes-3632-2843/external-snapshotter-leaderelection-csi-mock-volumes-3632 Mar 25 12:19:07.694: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3632-2843/external-snapshotter-leaderelection Mar 25 12:19:08.463: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3632-2843/csi-mock Mar 25 12:19:08.625: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3632 Mar 25 12:19:08.762: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3632 Mar 25 12:19:08.964: 
INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3632
Mar 25 12:19:08.989: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3632
Mar 25 12:19:09.115: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3632
Mar 25 12:19:09.217: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3632
Mar 25 12:19:09.253: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3632
Mar 25 12:19:09.296: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3632-2843/csi-mockplugin
Mar 25 12:19:09.421: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3632
Mar 25 12:19:09.472: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3632-2843/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-3632-2843
STEP: Waiting for namespaces [csi-mock-volumes-3632-2843] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:19:42.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:198.185 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI workload information using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
should not be passed when podInfoOnMount=false
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":133,"completed":25,"skipped":1461,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
S
------------------------------
[sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:19:43.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should expand volume by restarting pod if attach=on, nodeExpansion=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
STEP: Building a driver namespace object, basename csi-mock-volumes-3816
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 12:19:45.588: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-attacher
Mar 25 12:19:45.802: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3816
Mar 25 12:19:45.802: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3816
Mar 25 12:19:46.171: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3816
Mar 25 12:19:46.356: INFO: creating *v1.Role: csi-mock-volumes-3816-6987/external-attacher-cfg-csi-mock-volumes-3816
Mar 25 12:19:46.455: INFO: creating *v1.RoleBinding: csi-mock-volumes-3816-6987/csi-attacher-role-cfg
Mar 25 12:19:47.054: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-provisioner
Mar 25 12:19:47.669: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3816
Mar 25 12:19:47.669: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3816
Mar 25 12:19:47.927: INFO:
creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3816 Mar 25 12:19:47.971: INFO: creating *v1.Role: csi-mock-volumes-3816-6987/external-provisioner-cfg-csi-mock-volumes-3816 Mar 25 12:19:48.189: INFO: creating *v1.RoleBinding: csi-mock-volumes-3816-6987/csi-provisioner-role-cfg Mar 25 12:19:48.267: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-resizer Mar 25 12:19:48.362: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3816 Mar 25 12:19:48.362: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3816 Mar 25 12:19:48.395: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3816 Mar 25 12:19:48.411: INFO: creating *v1.Role: csi-mock-volumes-3816-6987/external-resizer-cfg-csi-mock-volumes-3816 Mar 25 12:19:48.628: INFO: creating *v1.RoleBinding: csi-mock-volumes-3816-6987/csi-resizer-role-cfg Mar 25 12:19:49.282: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-snapshotter Mar 25 12:19:49.508: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3816 Mar 25 12:19:49.508: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3816 Mar 25 12:19:49.551: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3816 Mar 25 12:19:49.587: INFO: creating *v1.Role: csi-mock-volumes-3816-6987/external-snapshotter-leaderelection-csi-mock-volumes-3816 Mar 25 12:19:50.280: INFO: creating *v1.RoleBinding: csi-mock-volumes-3816-6987/external-snapshotter-leaderelection Mar 25 12:19:50.512: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-mock Mar 25 12:19:50.551: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3816 Mar 25 12:19:50.604: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3816 Mar 25 12:19:51.075: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3816 
Mar 25 12:19:51.214: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3816 Mar 25 12:19:51.278: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3816 Mar 25 12:19:51.356: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3816 Mar 25 12:19:51.371: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3816 Mar 25 12:19:51.446: INFO: creating *v1.StatefulSet: csi-mock-volumes-3816-6987/csi-mockplugin Mar 25 12:19:51.554: INFO: creating *v1.StatefulSet: csi-mock-volumes-3816-6987/csi-mockplugin-attacher Mar 25 12:19:51.860: INFO: creating *v1.StatefulSet: csi-mock-volumes-3816-6987/csi-mockplugin-resizer Mar 25 12:19:51.867: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3816 to register on node latest-worker STEP: Creating pod Mar 25 12:20:23.203: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 12:20:24.063: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-94rbs] to have phase Bound Mar 25 12:20:24.067: INFO: PersistentVolumeClaim pvc-94rbs found but phase is Pending instead of Bound. 
Mar 25 12:20:26.137: INFO: PersistentVolumeClaim pvc-94rbs found and phase=Bound (2.07359119s)
STEP: Expanding current pvc
STEP: Waiting for persistent volume resize to finish
STEP: Checking for conditions on pvc
STEP: Deleting the previously created pod
Mar 25 12:20:57.711: INFO: Deleting pod "pvc-volume-tester-cv6rm" in namespace "csi-mock-volumes-3816"
Mar 25 12:20:57.843: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cv6rm" to be fully deleted
STEP: Creating a new pod with same volume
STEP: Waiting for PVC resize to finish
STEP: Deleting pod pvc-volume-tester-cv6rm
Mar 25 12:21:12.141: INFO: Deleting pod "pvc-volume-tester-cv6rm" in namespace "csi-mock-volumes-3816"
STEP: Deleting pod pvc-volume-tester-wvmk6
Mar 25 12:21:13.005: INFO: Deleting pod "pvc-volume-tester-wvmk6" in namespace "csi-mock-volumes-3816"
Mar 25 12:21:13.437: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wvmk6" to be fully deleted
STEP: Deleting claim pvc-94rbs
Mar 25 12:21:25.959: INFO: Waiting up to 2m0s for PersistentVolume pvc-4a193c67-3bbf-4099-a407-5d3698de3327 to get deleted
Mar 25 12:21:26.083: INFO: PersistentVolume pvc-4a193c67-3bbf-4099-a407-5d3698de3327 found and phase=Bound (124.403591ms)
Mar 25 12:21:28.109: INFO: PersistentVolume pvc-4a193c67-3bbf-4099-a407-5d3698de3327 was removed
STEP: Deleting storageclass csi-mock-volumes-3816-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-3816
STEP: Waiting for namespaces [csi-mock-volumes-3816] to vanish
STEP: uninstalling csi mock driver
Mar 25 12:21:49.238: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-attacher
Mar 25 12:21:50.475: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3816
Mar 25 12:21:51.095: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3816
Mar 25 12:21:51.360: INFO: deleting *v1.Role: csi-mock-volumes-3816-6987/external-attacher-cfg-csi-mock-volumes-3816
Mar 25 12:21:51.550: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3816-6987/csi-attacher-role-cfg
Mar 25 12:21:52.021: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-provisioner
Mar 25 12:21:54.348: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3816
Mar 25 12:21:56.298: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3816
Mar 25 12:21:57.213: INFO: deleting *v1.Role: csi-mock-volumes-3816-6987/external-provisioner-cfg-csi-mock-volumes-3816
Mar 25 12:21:57.711: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3816-6987/csi-provisioner-role-cfg
Mar 25 12:21:58.450: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-resizer
Mar 25 12:21:58.655: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3816
Mar 25 12:21:58.872: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3816
Mar 25 12:21:59.896: INFO: deleting *v1.Role: csi-mock-volumes-3816-6987/external-resizer-cfg-csi-mock-volumes-3816
Mar 25 12:22:00.060: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3816-6987/csi-resizer-role-cfg
Mar 25 12:22:00.333: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-snapshotter
Mar 25 12:22:00.584: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3816
Mar 25 12:22:02.539: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3816
Mar 25 12:22:03.218: INFO: deleting *v1.Role: csi-mock-volumes-3816-6987/external-snapshotter-leaderelection-csi-mock-volumes-3816
Mar 25 12:22:03.747: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3816-6987/external-snapshotter-leaderelection
Mar 25 12:22:04.473: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3816-6987/csi-mock
Mar 25 12:22:04.645: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3816
Mar 25 12:22:04.838: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3816
Mar 25 12:22:05.525: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3816
Mar 25 12:22:05.636: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3816
Mar 25 12:22:05.741: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3816
Mar 25 12:22:06.471: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3816
Mar 25 12:22:06.939: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3816
Mar 25 12:22:07.211: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3816-6987/csi-mockplugin
Mar 25 12:22:07.422: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3816-6987/csi-mockplugin-attacher
Mar 25 12:22:08.082: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3816-6987/csi-mockplugin-resizer
STEP: deleting the driver namespace: csi-mock-volumes-3816-6987
STEP: Waiting for namespaces [csi-mock-volumes-3816-6987] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:22:33.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:170.300 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI Volume expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
should expand volume by restarting pod if attach=on, nodeExpansion=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":133,"completed":26,"skipped":1462,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:22:33.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not require VolumeAttach for drivers without attachment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
STEP: Building a driver namespace object, basename csi-mock-volumes-9102
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 12:22:34.620: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-attacher
Mar 25 12:22:34.660: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9102
Mar 25 12:22:34.660: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9102
Mar 25 12:22:34.704: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9102
Mar 25 12:22:34.712: INFO: creating *v1.Role: csi-mock-volumes-9102-5235/external-attacher-cfg-csi-mock-volumes-9102
Mar 25 12:22:34.755: INFO: creating *v1.RoleBinding: csi-mock-volumes-9102-5235/csi-attacher-role-cfg
Mar 25 12:22:34.873: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-provisioner
Mar 25 12:22:34.892: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9102
Mar 25 12:22:34.892: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9102
Mar 25 12:22:34.998: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9102
Mar 25 12:22:35.027: INFO: creating *v1.Role: csi-mock-volumes-9102-5235/external-provisioner-cfg-csi-mock-volumes-9102
Mar 25 12:22:35.081: INFO: creating *v1.RoleBinding: csi-mock-volumes-9102-5235/csi-provisioner-role-cfg
Mar 25 12:22:35.124: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-resizer
Mar 25 12:22:35.163: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9102
Mar 25 12:22:35.163: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9102
Mar 25 12:22:35.182: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9102
Mar 25 12:22:35.209: INFO: creating *v1.Role: csi-mock-volumes-9102-5235/external-resizer-cfg-csi-mock-volumes-9102
Mar 25 12:22:35.270: INFO: creating *v1.RoleBinding: csi-mock-volumes-9102-5235/csi-resizer-role-cfg
Mar 25 12:22:35.334: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-snapshotter
Mar 25 12:22:35.341: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9102
Mar 25 12:22:35.341: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9102
Mar 25 12:22:35.401: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9102
Mar 25 12:22:35.524: INFO: creating *v1.Role: csi-mock-volumes-9102-5235/external-snapshotter-leaderelection-csi-mock-volumes-9102
Mar 25 12:22:35.531: INFO: creating *v1.RoleBinding: csi-mock-volumes-9102-5235/external-snapshotter-leaderelection
Mar 25 12:22:35.612: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-mock
Mar 25 12:22:35.726: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9102
Mar 25 12:22:35.779: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9102
Mar 25 12:22:35.796: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9102
Mar 25 12:22:35.817: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9102
Mar 25 12:22:35.905: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9102
Mar 25 12:22:35.915: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9102
Mar 25 12:22:35.954: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9102
Mar 25 12:22:35.969: INFO: creating *v1.StatefulSet: csi-mock-volumes-9102-5235/csi-mockplugin
Mar 25 12:22:36.038: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9102
Mar 25 12:22:36.104: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9102"
Mar 25 12:22:36.196: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9102 to register on node latest-worker2
STEP: Creating pod
Mar 25 12:22:54.193: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Mar 25 12:22:54.757: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-drjfv] to have phase Bound
Mar 25 12:22:55.008: INFO: PersistentVolumeClaim pvc-drjfv found but phase is Pending instead of Bound.
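The `Waiting up to 5m0s for PersistentVolumeClaims [...] to have phase Bound` entries above come from a poll loop: the framework fetches the claim's phase, logs `found but phase is Pending instead of Bound.` while it differs, and returns once the phase matches or the timeout expires. A minimal shell sketch of that pattern, where `get_phase` is a hypothetical stand-in for querying the PVC's `.status.phase` (e.g. via `kubectl get pvc <name> -o jsonpath='{.status.phase}'`):

```shell
#!/bin/sh
# Sketch of the poll-until-Bound pattern seen in the log. `get_phase` is a
# hypothetical helper the caller must supply; it prints the current phase.
wait_for_phase() {
    want=$1
    timeout=$2   # seconds; the real framework uses 3m0s-5m0s
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        phase=$(get_phase)
        if [ "$phase" = "$want" ]; then
            echo "found and phase=$want (${elapsed}s)"
            return 0
        fi
        echo "found but phase is $phase instead of $want."
        sleep 1
        elapsed=$((elapsed + 1))
    done
    echo "timed out waiting for phase $want" >&2
    return 1
}
```

The real implementation polls on a fixed interval (the log shows roughly 2 s between attempts) rather than every second; the structure is the same.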
Mar 25 12:22:57.011: INFO: PersistentVolumeClaim pvc-drjfv found and phase=Bound (2.253868603s)
STEP: Checking if VolumeAttachment was created for the pod
STEP: Deleting pod pvc-volume-tester-x7bfj
Mar 25 12:23:07.541: INFO: Deleting pod "pvc-volume-tester-x7bfj" in namespace "csi-mock-volumes-9102"
Mar 25 12:23:07.752: INFO: Wait up to 5m0s for pod "pvc-volume-tester-x7bfj" to be fully deleted
STEP: Deleting claim pvc-drjfv
Mar 25 12:23:39.520: INFO: Waiting up to 2m0s for PersistentVolume pvc-d6761050-fa01-44df-8f10-fbe03accd49e to get deleted
Mar 25 12:23:40.331: INFO: PersistentVolume pvc-d6761050-fa01-44df-8f10-fbe03accd49e found and phase=Bound (810.966629ms)
Mar 25 12:23:42.814: INFO: PersistentVolume pvc-d6761050-fa01-44df-8f10-fbe03accd49e was removed
STEP: Deleting storageclass csi-mock-volumes-9102-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9102
STEP: Waiting for namespaces [csi-mock-volumes-9102] to vanish
STEP: uninstalling csi mock driver
Mar 25 12:24:07.638: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-attacher
Mar 25 12:24:07.942: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9102
Mar 25 12:24:09.450: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9102
Mar 25 12:24:09.829: INFO: deleting *v1.Role: csi-mock-volumes-9102-5235/external-attacher-cfg-csi-mock-volumes-9102
Mar 25 12:24:10.558: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9102-5235/csi-attacher-role-cfg
Mar 25 12:24:10.818: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-provisioner
Mar 25 12:24:11.097: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9102
Mar 25 12:24:11.468: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9102
Mar 25 12:24:12.399: INFO: deleting *v1.Role: csi-mock-volumes-9102-5235/external-provisioner-cfg-csi-mock-volumes-9102
Mar 25 12:24:12.466: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9102-5235/csi-provisioner-role-cfg
Mar 25 12:24:12.533: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-resizer
Mar 25 12:24:12.631: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9102
Mar 25 12:24:13.225: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9102
Mar 25 12:24:13.338: INFO: deleting *v1.Role: csi-mock-volumes-9102-5235/external-resizer-cfg-csi-mock-volumes-9102
Mar 25 12:24:13.473: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9102-5235/csi-resizer-role-cfg
Mar 25 12:24:13.610: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-snapshotter
Mar 25 12:24:13.667: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9102
Mar 25 12:24:13.821: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9102
Mar 25 12:24:14.564: INFO: deleting *v1.Role: csi-mock-volumes-9102-5235/external-snapshotter-leaderelection-csi-mock-volumes-9102
Mar 25 12:24:14.713: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9102-5235/external-snapshotter-leaderelection
Mar 25 12:24:15.246: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9102-5235/csi-mock
Mar 25 12:24:15.758: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9102
Mar 25 12:24:15.971: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9102
Mar 25 12:24:16.086: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9102
Mar 25 12:24:16.236: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9102
Mar 25 12:24:16.328: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9102
Mar 25 12:24:16.368: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9102
Mar 25 12:24:16.396: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9102
Mar 25 12:24:16.423: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9102-5235/csi-mockplugin
Mar 25 12:24:16.540: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9102
STEP: deleting the driver namespace: csi-mock-volumes-9102-5235
STEP: Waiting for namespaces [csi-mock-volumes-9102-5235] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:25:05.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:152.607 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI attach test using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
should not require VolumeAttach for drivers without attachment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":133,"completed":27,"skipped":1520,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:25:05.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Mar 25 12:25:09.000: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:25:09.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-4285" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82
S [SKIPPING] in Spec Setup (BeforeEach) [3.931 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
PVController [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383
should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503
Only supported for providers [gce gke aws] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
S
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:25:09.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 25 12:25:21.097: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05-backend && mount --bind /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05-backend /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05-backend && ln -s /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05-backend /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05] Namespace:persistent-local-volumes-test-5095 PodName:hostexec-latest-worker2-fsvqq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:25:21.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 12:25:21.196: INFO: Creating a PV followed by a PVC
Mar 25 12:25:22.042: INFO: Waiting for PV local-pvhs8pp to bind to PVC pvc-xhcjr
Mar 25 12:25:22.042: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-xhcjr] to have phase Bound
Mar 25 12:25:22.519: INFO: PersistentVolumeClaim pvc-xhcjr found but phase is Pending instead of Bound.
Mar 25 12:25:24.574: INFO: PersistentVolumeClaim pvc-xhcjr found and phase=Bound (2.5322936s)
Mar 25 12:25:24.574: INFO: Waiting up to 3m0s for PersistentVolume local-pvhs8pp to have phase Bound
Mar 25 12:25:24.619: INFO: PersistentVolume local-pvhs8pp found and phase=Bound (44.921515ms)
[It] should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
Mar 25 12:25:36.107: INFO: pod "pod-d777a3cf-81af-4104-97e3-c9d031b90ed1" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 25 12:25:36.107: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5095 PodName:pod-d777a3cf-81af-4104-97e3-c9d031b90ed1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:25:36.107: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:25:37.159: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
Mar 25 12:25:37.159: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5095 PodName:pod-d777a3cf-81af-4104-97e3-c9d031b90ed1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:25:37.159: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:25:37.918: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Mar 25 12:25:44.118: INFO: pod "pod-26a2604d-6777-48c4-a3ea-36958956ce53" created on Node "latest-worker2"
Mar 25 12:25:44.118: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5095 PodName:pod-26a2604d-6777-48c4-a3ea-36958956ce53 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:25:44.118: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:25:44.331: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Writing in pod2
Mar 25 12:25:44.331: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5095 PodName:pod-26a2604d-6777-48c4-a3ea-36958956ce53 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:25:44.331: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:25:44.482: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05 > /mnt/volume1/test-file", out: "", stderr: "", err:
STEP: Reading in pod1
Mar 25 12:25:44.482: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5095 PodName:pod-d777a3cf-81af-4104-97e3-c9d031b90ed1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:25:44.482: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:25:44.679: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05", stderr: "", err:
STEP: Deleting pod1
STEP: Deleting pod pod-d777a3cf-81af-4104-97e3-c9d031b90ed1 in namespace persistent-local-volumes-test-5095
STEP: Deleting pod2
STEP: Deleting pod pod-26a2604d-6777-48c4-a3ea-36958956ce53 in namespace persistent-local-volumes-test-5095
[AfterEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 12:25:44.850: INFO: Deleting PersistentVolumeClaim "pvc-xhcjr"
Mar 25 12:25:44.954: INFO: Deleting PersistentVolume "local-pvhs8pp"
STEP: Removing the test directory
Mar 25 12:25:44.966: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05 && umount /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05-backend && rm -r /tmp/local-volume-test-7bfe0df0-16dc-4c59-817f-a044f2f04c05-backend] Namespace:persistent-local-volumes-test-5095 PodName:hostexec-latest-worker2-fsvqq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:25:44.966: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:25:45.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-5095" for this suite.
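The `Initializing test volumes` and `Removing the test directory` commands in the test above form a matched setup/teardown pair for the dir-link-bindmounted volume type: create a backing directory, bind-mount it onto itself, expose it through a symlink; then remove the link, unmount, and delete the backing directory. A standalone sketch of that lifecycle (paths are illustrative, not the ones from the log; the bind mount and unmount need root, so those two steps are left commented):

```shell
#!/bin/sh
# Sketch of the dir-link-bindmounted volume lifecycle from the log:
# backing directory -> self bind mount -> symlink. Run the commented
# mount/umount lines only as root on a Linux host.
base=$(mktemp -d)/local-volume-test-example   # illustrative path
mkdir -p "${base}-backend"
# mount --bind "${base}-backend" "${base}-backend"
ln -s "${base}-backend" "${base}"

# Writes through the symlink land in the backing directory.
echo "test-file-content" > "${base}/test-file"
content=$(cat "${base}/test-file")
echo "$content"

# Teardown, mirroring the log's cleanup order: link, (unmount), directory.
rm "${base}"
# umount "${base}-backend"
rm -r "${base}-backend"
```

The self bind mount looks redundant, but it gives the directory its own mount point, which is exactly the property this volume type is exercising.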
• [SLOW TEST:36.436 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":28,"skipped":1547,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Mounted volume expand Should verify mounted devices can be resized
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:117
[BeforeEach] [sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:25:46.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename mounted-volume-expand
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:59
Mar 25 12:25:47.489: INFO: Only supported for providers [aws gce] (not local)
[AfterEach] [sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:25:47.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "mounted-volume-expand-387" for this suite.
[AfterEach] [sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:105
Mar 25 12:25:47.857: INFO: AfterEach: Cleaning up resources for mounted volume resize
S [SKIPPING] in Spec Setup (BeforeEach) [1.561 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should verify mounted devices can be resized [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:117
Only supported for providers [aws gce] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:60
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:25:47.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09"
Mar 25 12:25:52.969: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09 && dd if=/dev/zero of=/tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09/file] Namespace:persistent-local-volumes-test-1013 PodName:hostexec-latest-worker2-dc4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:25:52.970: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:25:53.384: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1013 PodName:hostexec-latest-worker2-dc4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:25:53.384: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:25:53.508: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09 && chmod o+rwx /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09] Namespace:persistent-local-volumes-test-1013 PodName:hostexec-latest-worker2-dc4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:25:53.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 12:25:53.964: INFO: Creating a PV followed by a PVC
Mar 25 12:25:54.091: INFO: Waiting for PV local-pv689g5 to bind to PVC pvc-qwpmg
Mar 25 12:25:54.091: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qwpmg] to have phase Bound
Mar 25 12:25:54.198: INFO: PersistentVolumeClaim pvc-qwpmg found but phase is Pending instead of Bound.
Mar 25 12:25:56.226: INFO: PersistentVolumeClaim pvc-qwpmg found and phase=Bound (2.134851383s)
Mar 25 12:25:56.226: INFO: Waiting up to 3m0s for PersistentVolume local-pv689g5 to have phase Bound
Mar 25 12:25:57.451: INFO: PersistentVolume local-pv689g5 found and phase=Bound (1.225102591s)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Mar 25 12:26:06.934: INFO: pod "pod-18272c6d-c60f-4363-a6ae-206e94fdfabd" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 25 12:26:06.934: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1013 PodName:pod-18272c6d-c60f-4363-a6ae-206e94fdfabd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:26:06.934: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:26:07.428: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
[It] should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Mar 25 12:26:07.428: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1013 PodName:pod-18272c6d-c60f-4363-a6ae-206e94fdfabd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:26:07.428: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:26:08.762: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-18272c6d-c60f-4363-a6ae-206e94fdfabd in namespace persistent-local-volumes-test-1013
[AfterEach] [Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 12:26:09.452: INFO: Deleting PersistentVolumeClaim "pvc-qwpmg"
Mar 25 12:26:10.650: INFO: Deleting PersistentVolume "local-pv689g5"
Mar 25 12:26:11.049: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09] Namespace:persistent-local-volumes-test-1013 PodName:hostexec-latest-worker2-dc4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:26:11.049: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:26:11.524: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1013 PodName:hostexec-latest-worker2-dc4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:26:11.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09/file
Mar 25 12:26:11.643: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1013 PodName:hostexec-latest-worker2-dc4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:26:11.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09
Mar 25 12:26:12.030: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d225503c-bbe3-441c-87b7-cd7848e1cc09] Namespace:persistent-local-volumes-test-1013 PodName:hostexec-latest-worker2-dc4tz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:26:12.030: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:26:13.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1013" for this suite.
• [SLOW TEST:25.569 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":29,"skipped":1630,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:26:13.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 25 12:26:18.712: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ea4d08aa-9b6e-40eb-8a63-ad7aa0a4840a] Namespace:persistent-local-volumes-test-3020 PodName:hostexec-latest-worker2-ljxqp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:26:18.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 12:26:19.499: INFO: Creating a PV followed by a PVC
Mar 25 12:26:19.772: INFO: Waiting for PV local-pvkm6zh to bind to PVC pvc-qm2vr
Mar 25 12:26:19.772: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qm2vr] to have phase Bound
Mar 25 12:26:19.868: INFO: PersistentVolumeClaim pvc-qm2vr found but phase is Pending instead of Bound.
Mar 25 12:26:23.182: INFO: PersistentVolumeClaim pvc-qm2vr found but phase is Pending instead of Bound.
Mar 25 12:26:25.509: INFO: PersistentVolumeClaim pvc-qm2vr found but phase is Pending instead of Bound.
Mar 25 12:26:27.712: INFO: PersistentVolumeClaim pvc-qm2vr found but phase is Pending instead of Bound.
Mar 25 12:26:29.789: INFO: PersistentVolumeClaim pvc-qm2vr found but phase is Pending instead of Bound.
Mar 25 12:26:32.205: INFO: PersistentVolumeClaim pvc-qm2vr found but phase is Pending instead of Bound.
Mar 25 12:26:34.491: INFO: PersistentVolumeClaim pvc-qm2vr found and phase=Bound (14.718613471s)
Mar 25 12:26:34.491: INFO: Waiting up to 3m0s for PersistentVolume local-pvkm6zh to have phase Bound
Mar 25 12:26:34.730: INFO: PersistentVolume local-pvkm6zh found and phase=Bound (238.562484ms)
[BeforeEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Mar 25 12:26:45.060: INFO: pod "pod-bc339f6d-1dce-4abe-b130-2edf424c1b49" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 25 12:26:45.060: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3020 PodName:pod-bc339f6d-1dce-4abe-b130-2edf424c1b49 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:26:45.060: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:26:45.840: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
[It] should be able to mount volume and read from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Mar 25 12:26:45.840: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3020 PodName:pod-bc339f6d-1dce-4abe-b130-2edf424c1b49 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:26:45.840: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:26:46.344: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
[AfterEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-bc339f6d-1dce-4abe-b130-2edf424c1b49 in namespace persistent-local-volumes-test-3020
[AfterEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 12:26:47.095: INFO: Deleting PersistentVolumeClaim "pvc-qm2vr"
Mar 25 12:26:47.411: INFO: Deleting PersistentVolume "local-pvkm6zh"
STEP: Removing the test directory
Mar 25 12:26:48.113: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ea4d08aa-9b6e-40eb-8a63-ad7aa0a4840a] Namespace:persistent-local-volumes-test-3020 PodName:hostexec-latest-worker2-ljxqp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:26:48.113: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:26:49.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3020" for this suite.
• [SLOW TEST:36.771 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":30,"skipped":1839,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:26:50.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 25 12:26:50.852: INFO: Waiting up to 5m0s for pod "pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f" in namespace "emptydir-1142" to be "Succeeded or Failed"
Mar 25 12:26:50.951: INFO: Pod "pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 99.739181ms
Mar 25 12:26:53.078: INFO: Pod "pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226141764s
Mar 25 12:26:55.515: INFO: Pod "pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663783186s
Mar 25 12:26:57.761: INFO: Pod "pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f": Phase="Running", Reason="", readiness=true. Elapsed: 6.909439913s
Mar 25 12:26:59.855: INFO: Pod "pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.003161154s
STEP: Saw pod success
Mar 25 12:26:59.855: INFO: Pod "pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f" satisfied condition "Succeeded or Failed"
Mar 25 12:27:00.059: INFO: Trying to get logs from node latest-worker pod pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f container test-container:
STEP: delete the pod
Mar 25 12:27:01.154: INFO: Waiting for pod pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f to disappear
Mar 25 12:27:01.208: INFO: Pod pod-d26ea741-8ef0-4fa7-9948-7993ed7eaf6f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:27:01.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1142" for this suite.
• [SLOW TEST:11.696 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":133,"completed":31,"skipped":1881,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:27:01.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5"
Mar 25 12:27:12.161: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5 && dd if=/dev/zero of=/tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5/file] Namespace:persistent-local-volumes-test-3920 PodName:hostexec-latest-worker-2d97r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:27:12.161: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:27:14.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3920 PodName:hostexec-latest-worker-2d97r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:27:14.192: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:27:15.258: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5 && chmod o+rwx /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5] Namespace:persistent-local-volumes-test-3920 PodName:hostexec-latest-worker-2d97r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:27:15.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 12:27:17.020: INFO: Creating a PV followed by a PVC
Mar 25 12:27:17.908: INFO: Waiting for PV local-pv2jspj to bind to PVC pvc-d8ts6
Mar 25 12:27:17.908: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-d8ts6] to have phase Bound
Mar 25 12:27:18.473: INFO: PersistentVolumeClaim pvc-d8ts6 found but phase is Pending instead of Bound.
Mar 25 12:27:20.744: INFO: PersistentVolumeClaim pvc-d8ts6 found but phase is Pending instead of Bound.
Mar 25 12:27:23.251: INFO: PersistentVolumeClaim pvc-d8ts6 found but phase is Pending instead of Bound.
Mar 25 12:27:25.725: INFO: PersistentVolumeClaim pvc-d8ts6 found but phase is Pending instead of Bound.
Mar 25 12:27:27.768: INFO: PersistentVolumeClaim pvc-d8ts6 found but phase is Pending instead of Bound.
Mar 25 12:27:31.120: INFO: PersistentVolumeClaim pvc-d8ts6 found but phase is Pending instead of Bound.
Mar 25 12:27:33.615: INFO: PersistentVolumeClaim pvc-d8ts6 found but phase is Pending instead of Bound.
Mar 25 12:27:35.707: INFO: PersistentVolumeClaim pvc-d8ts6 found and phase=Bound (17.798966969s)
Mar 25 12:27:35.707: INFO: Waiting up to 3m0s for PersistentVolume local-pv2jspj to have phase Bound
Mar 25 12:27:36.146: INFO: PersistentVolume local-pv2jspj found and phase=Bound (438.869097ms)
[BeforeEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Mar 25 12:27:50.492: INFO: pod "pod-d65feb40-cdaf-4efd-878f-f98485781cf7" created on Node "latest-worker"
STEP: Writing in pod1
Mar 25 12:27:50.492: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3920 PodName:pod-d65feb40-cdaf-4efd-878f-f98485781cf7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:27:50.492: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:27:50.689: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
[It] should be able to mount volume and write from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
Mar 25 12:27:50.689: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3920 PodName:pod-d65feb40-cdaf-4efd-878f-f98485781cf7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:27:50.689: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:27:50.852: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Writing in pod1
Mar 25 12:27:50.852: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3920 PodName:pod-d65feb40-cdaf-4efd-878f-f98485781cf7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:27:50.852: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:27:50.975: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5 > /mnt/volume1/test-file", out: "", stderr: "", err:
[AfterEach] One pod requesting one prebound PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-d65feb40-cdaf-4efd-878f-f98485781cf7 in namespace persistent-local-volumes-test-3920
[AfterEach] [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 12:27:51.244: INFO: Deleting PersistentVolumeClaim "pvc-d8ts6"
Mar 25 12:27:51.432: INFO: Deleting PersistentVolume "local-pv2jspj"
Mar 25 12:27:51.641: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5] Namespace:persistent-local-volumes-test-3920 PodName:hostexec-latest-worker-2d97r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:27:51.642: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:27:51.985: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3920 PodName:hostexec-latest-worker-2d97r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:27:51.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5/file
Mar 25 12:27:52.377: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-3920 PodName:hostexec-latest-worker-2d97r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:27:52.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5
Mar 25 12:27:52.503: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b3807d39-fee7-49ad-924b-5f7e9d281ba5] Namespace:persistent-local-volumes-test-3920 PodName:hostexec-latest-worker-2d97r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:27:52.503: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:27:53.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3920" for this suite.
• [SLOW TEST:52.861 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":32,"skipped":1921,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:27:54.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Mar 25 12:27:55.226: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:27:55.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-497" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82
S [SKIPPING] in Spec Setup (BeforeEach) [0.728 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics with the correct PVC ref [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204
  Only supported for providers [gce gke aws] (not local)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:27:55.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should create and delete persistent volumes [fast]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794
STEP: creating a Gluster DP server Pod
STEP: locating the provisioner pod
STEP: creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Mar 25 12:28:01.975: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating a StorageClass volume-provisioning-5559-glusterdptest
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-5559 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-5559-glusterdptest,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
Mar 25 12:28:03.298: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-27r8v] to have phase Bound
Mar 25 12:28:03.555: INFO: PersistentVolumeClaim pvc-27r8v found but phase is Pending instead of Bound.
Mar 25 12:28:05.928: INFO: PersistentVolumeClaim pvc-27r8v found and phase=Bound (2.629343715s)
STEP: checking the claim
STEP: checking the PV
STEP: deleting claim "volume-provisioning-5559"/"pvc-27r8v"
STEP: deleting the claim's PV "pvc-f9f7ff75-2060-4ce4-8774-12466f76e1a3"
Mar 25 12:28:06.490: INFO: Waiting up to 20m0s for PersistentVolume pvc-f9f7ff75-2060-4ce4-8774-12466f76e1a3 to get deleted
Mar 25 12:28:06.556: INFO: PersistentVolume pvc-f9f7ff75-2060-4ce4-8774-12466f76e1a3 found and phase=Bound (66.324554ms)
Mar 25 12:28:11.560: INFO: PersistentVolume pvc-f9f7ff75-2060-4ce4-8774-12466f76e1a3 was removed
Mar 25 12:28:11.561: INFO: deleting claim "volume-provisioning-5559"/"pvc-27r8v"
Mar 25 12:28:11.586: INFO: deleting storage class volume-provisioning-5559-glusterdptest
[AfterEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:28:11.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-5559" for this suite.
• [SLOW TEST:16.568 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  GlusterDynamicProvisioner
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793
    should create and delete persistent volumes [fast]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":133,"completed":33,"skipped":2060,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:28:12.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-a558b41b-b26c-4069-904c-a0f5947a167f
STEP: Creating a pod to test consume configMaps
Mar 25 12:28:12.449: INFO: Waiting up to 5m0s for pod "pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa" in namespace "configmap-7578" to be "Succeeded or Failed"
Mar 25 12:28:12.577: INFO: Pod "pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa": Phase="Pending", Reason="", readiness=false. Elapsed: 127.849054ms
Mar 25 12:28:14.614: INFO: Pod "pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16525479s
Mar 25 12:28:17.229: INFO: Pod "pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.779857016s
Mar 25 12:28:19.258: INFO: Pod "pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.808792922s
Mar 25 12:28:21.300: INFO: Pod "pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.850983855s
Mar 25 12:28:23.767: INFO: Pod "pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.318097345s
STEP: Saw pod success
Mar 25 12:28:23.767: INFO: Pod "pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa" satisfied condition "Succeeded or Failed"
Mar 25 12:28:24.043: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa container agnhost-container:
STEP: delete the pod
Mar 25 12:28:24.827: INFO: Waiting for pod pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa to disappear
Mar 25 12:28:24.922: INFO: Pod pod-configmaps-e89e516e-f09e-4cb9-a88d-42ec88ddbbfa no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:28:24.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7578" for this suite.
• [SLOW TEST:13.099 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":34,"skipped":2076,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSS ------------------------------ [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:28:25.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-5119 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Mar 25 12:28:28.103: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-attacher Mar 25 12:28:28.340: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5119 Mar 25 12:28:28.340: INFO: Define cluster role 
external-attacher-runner-csi-mock-volumes-5119 Mar 25 12:28:28.350: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5119 Mar 25 12:28:28.355: INFO: creating *v1.Role: csi-mock-volumes-5119-2815/external-attacher-cfg-csi-mock-volumes-5119 Mar 25 12:28:28.843: INFO: creating *v1.RoleBinding: csi-mock-volumes-5119-2815/csi-attacher-role-cfg Mar 25 12:28:29.171: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-provisioner Mar 25 12:28:29.238: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5119 Mar 25 12:28:29.238: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5119 Mar 25 12:28:29.353: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5119 Mar 25 12:28:29.366: INFO: creating *v1.Role: csi-mock-volumes-5119-2815/external-provisioner-cfg-csi-mock-volumes-5119 Mar 25 12:28:29.396: INFO: creating *v1.RoleBinding: csi-mock-volumes-5119-2815/csi-provisioner-role-cfg Mar 25 12:28:29.497: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-resizer Mar 25 12:28:29.528: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5119 Mar 25 12:28:29.528: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5119 Mar 25 12:28:29.558: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5119 Mar 25 12:28:29.677: INFO: creating *v1.Role: csi-mock-volumes-5119-2815/external-resizer-cfg-csi-mock-volumes-5119 Mar 25 12:28:29.744: INFO: creating *v1.RoleBinding: csi-mock-volumes-5119-2815/csi-resizer-role-cfg Mar 25 12:28:30.567: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-snapshotter Mar 25 12:28:31.014: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5119 Mar 25 12:28:31.014: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5119 Mar 25 12:28:31.037: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5119 
Mar 25 12:28:31.379: INFO: creating *v1.Role: csi-mock-volumes-5119-2815/external-snapshotter-leaderelection-csi-mock-volumes-5119 Mar 25 12:28:31.397: INFO: creating *v1.RoleBinding: csi-mock-volumes-5119-2815/external-snapshotter-leaderelection Mar 25 12:28:31.459: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-mock Mar 25 12:28:31.749: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5119 Mar 25 12:28:31.815: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5119 Mar 25 12:28:31.845: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5119 Mar 25 12:28:32.163: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5119 Mar 25 12:28:32.312: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5119 Mar 25 12:28:32.370: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5119 Mar 25 12:28:32.491: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5119 Mar 25 12:28:32.566: INFO: creating *v1.StatefulSet: csi-mock-volumes-5119-2815/csi-mockplugin Mar 25 12:28:32.587: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5119 Mar 25 12:28:32.675: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5119" Mar 25 12:28:32.933: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5119 to register on node latest-worker2 I0325 12:28:50.220619 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0325 12:28:50.222545 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5119","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 12:28:50.270629 7 
csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I0325 12:28:50.288752 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5119","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 12:28:50.314871 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0325 12:28:50.455832 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5119","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} STEP: Creating pod Mar 25 12:29:01.534: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0325 12:29:03.364563 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake 
error","FullError":{"code":8,"message":"fake error"}} I0325 12:29:04.524525 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I0325 12:29:05.895826 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 12:29:05.899: INFO: >>> kubeConfig: /root/.kube/config I0325 12:29:06.093314 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0","storage.kubernetes.io/csiProvisionerIdentity":"1616675330357-8081-csi-mock-csi-mock-volumes-5119"}},"Response":{},"Error":"","FullError":null} I0325 12:29:06.812450 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 12:29:06.819: INFO: >>> kubeConfig: /root/.kube/config Mar 25 
12:29:07.118: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:29:07.252: INFO: >>> kubeConfig: /root/.kube/config I0325 12:29:07.895596 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0/globalmount","target_path":"/var/lib/kubelet/pods/9e111146-7d19-4f78-924e-fd575f94507c/volumes/kubernetes.io~csi/pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0","storage.kubernetes.io/csiProvisionerIdentity":"1616675330357-8081-csi-mock-csi-mock-volumes-5119"}},"Response":{},"Error":"","FullError":null} Mar 25 12:29:17.286: INFO: Deleting pod "pvc-volume-tester-4mxh7" in namespace "csi-mock-volumes-5119" Mar 25 12:29:17.523: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4mxh7" to be fully deleted Mar 25 12:29:21.357: INFO: >>> kubeConfig: /root/.kube/config I0325 12:29:21.667465 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9e111146-7d19-4f78-924e-fd575f94507c/volumes/kubernetes.io~csi/pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0/mount"},"Response":{},"Error":"","FullError":null} I0325 12:29:21.759954 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0325 12:29:21.762588 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0/globalmount"},"Response":{},"Error":"","FullError":null} I0325 12:29:59.686890 7 csi.go:380] gRPCCall: 
{"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Mar 25 12:29:59.697: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-w2vhh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5119", SelfLink:"", UID:"6cb21b05-b846-49df-b468-ccaee94aa8c0", ResourceVersion:"1149816", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752272141, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001769e30), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001769e48)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001ece440), VolumeMode:(*v1.PersistentVolumeMode)(0xc001ece450), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:29:59.697: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-w2vhh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5119", SelfLink:"", 
UID:"6cb21b05-b846-49df-b468-ccaee94aa8c0", ResourceVersion:"1149825", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752272141, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001527fb0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001527fc8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001527fe0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0029ee000)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00208d1d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00208d1e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:29:59.697: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-w2vhh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5119", SelfLink:"", UID:"6cb21b05-b846-49df-b468-ccaee94aa8c0", ResourceVersion:"1149828", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752272141, 
loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5119", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a4b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a4c8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a4e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a4f8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a510), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a528)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001dd3c90), VolumeMode:(*v1.PersistentVolumeMode)(0xc001dd3cb0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:29:59.697: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-w2vhh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5119", SelfLink:"", 
UID:"6cb21b05-b846-49df-b468-ccaee94aa8c0", ResourceVersion:"1149832", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752272141, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5119"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a540), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a558)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a588), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a5a0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a5b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a5d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001dd3d20), VolumeMode:(*v1.PersistentVolumeMode)(0xc001dd3d30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:29:59.697: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-w2vhh", 
GenerateName:"pvc-", Namespace:"csi-mock-volumes-5119", SelfLink:"", UID:"6cb21b05-b846-49df-b468-ccaee94aa8c0", ResourceVersion:"1149851", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752272141, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5119", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a600), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a618)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a630), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a648)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a678)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001dd3d90), VolumeMode:(*v1.PersistentVolumeMode)(0xc001dd3dd0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:29:59.697: INFO: PVC event MODIFIED: 
&v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-w2vhh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5119", SelfLink:"", UID:"6cb21b05-b846-49df-b468-ccaee94aa8c0", ResourceVersion:"1149857", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752272141, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5119", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a6a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a6c0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a6d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a6f0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00117a708), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00117a720)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0", StorageClassName:(*string)(0xc001dd3e20), VolumeMode:(*v1.PersistentVolumeMode)(0xc001dd3e40), 
DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:29:59.697: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-w2vhh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5119", SelfLink:"", UID:"6cb21b05-b846-49df-b468-ccaee94aa8c0", ResourceVersion:"1149860", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752272141, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5119", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003934c90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003934ca8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003934cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003934cd8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003934cf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003934d08)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0", StorageClassName:(*string)(0xc0005ddb10), VolumeMode:(*v1.PersistentVolumeMode)(0xc0005ddb20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:29:59.698: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-w2vhh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5119", SelfLink:"", UID:"6cb21b05-b846-49df-b468-ccaee94aa8c0", ResourceVersion:"1150549", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752272141, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc003934d38), DeletionGracePeriodSeconds:(*int64)(0xc002febd18), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5119", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003934d50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003934d68)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003934d80), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc003934d98)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003934db0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003934dc8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0", StorageClassName:(*string)(0xc0005ddba0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0005ddbd0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:29:59.698: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-w2vhh", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5119", SelfLink:"", UID:"6cb21b05-b846-49df-b468-ccaee94aa8c0", ResourceVersion:"1150559", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752272141, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc003934df8), DeletionGracePeriodSeconds:(*int64)(0xc002febdf8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5119", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003934e10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003934e28)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003934e40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003934e58)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003934e70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003934e88)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-6cb21b05-b846-49df-b468-ccaee94aa8c0", StorageClassName:(*string)(0xc0005ddc80), VolumeMode:(*v1.PersistentVolumeMode)(0xc0005ddc90), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
STEP: Deleting pod pvc-volume-tester-4mxh7
Mar 25 12:29:59.698: INFO: Deleting pod "pvc-volume-tester-4mxh7" in namespace "csi-mock-volumes-5119"
STEP: Deleting claim pvc-w2vhh
STEP: Deleting storageclass csi-mock-volumes-5119-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-5119
STEP: Waiting for namespaces [csi-mock-volumes-5119] to vanish
STEP: uninstalling csi mock driver
Mar 25 12:30:24.229: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-attacher
Mar 25 12:30:24.323: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5119
Mar 25 12:30:24.435: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5119
Mar 25 12:30:24.570: INFO: deleting *v1.Role: csi-mock-volumes-5119-2815/external-attacher-cfg-csi-mock-volumes-5119
Mar 25 12:30:24.758: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5119-2815/csi-attacher-role-cfg
Mar 25 12:30:24.898: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-provisioner
Mar 25 12:30:25.037: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5119
Mar 25 12:30:25.099: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5119
Mar 25 12:30:25.187: INFO: deleting *v1.Role: csi-mock-volumes-5119-2815/external-provisioner-cfg-csi-mock-volumes-5119
Mar 25 12:30:25.194: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5119-2815/csi-provisioner-role-cfg
Mar 25 12:30:25.232: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-resizer
Mar 25 12:30:25.378: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5119
Mar 25 12:30:25.519: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5119
Mar 25 12:30:25.585: INFO: deleting *v1.Role: csi-mock-volumes-5119-2815/external-resizer-cfg-csi-mock-volumes-5119
Mar 25 12:30:25.699: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5119-2815/csi-resizer-role-cfg
Mar 25 12:30:25.830: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-snapshotter
Mar 25 12:30:25.883: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5119
Mar 25 12:30:26.010: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5119
Mar 25 12:30:26.203: INFO: deleting *v1.Role: csi-mock-volumes-5119-2815/external-snapshotter-leaderelection-csi-mock-volumes-5119
Mar 25 12:30:26.295: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5119-2815/external-snapshotter-leaderelection
Mar 25 12:30:26.320: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5119-2815/csi-mock
Mar 25 12:30:26.468: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5119
Mar 25 12:30:26.636: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5119
Mar 25 12:30:26.650: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5119
Mar 25 12:30:26.813: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5119
Mar 25 12:30:26.959: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5119
Mar 25 12:30:27.013: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5119
Mar 25 12:30:27.123: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5119
Mar 25 12:30:27.304: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5119-2815/csi-mockplugin
Mar 25 12:30:27.432: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5119
STEP: deleting the driver namespace: csi-mock-volumes-5119-2815
STEP: Waiting for namespaces [csi-mock-volumes-5119-2815] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:32:05.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:221.093 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
storage capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
exhausted, late binding, with topology
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":133,"completed":35,"skipped":2085,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107
[BeforeEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:32:06.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv-protection
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51
Mar 25 12:32:07.532: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PV
STEP: Waiting for PV to enter phase Available
Mar 25 12:32:08.113: INFO: Waiting up to 30s for PersistentVolume hostpath-f5nll to have phase Available
Mar 25 12:32:08.349: INFO: PersistentVolume hostpath-f5nll found and phase=Available (236.445424ms)
STEP: Checking that PV Protection finalizer is set
[It] Verify that PV bound to a PVC is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107
STEP: Creating a PVC
STEP: Waiting for PVC to become Bound
Mar 25 12:32:08.744: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-8kvm5] to have phase Bound
Mar 25 12:32:08.984: INFO: PersistentVolumeClaim pvc-8kvm5 found but phase is Pending instead of Bound.
Mar 25 12:32:11.189: INFO: PersistentVolumeClaim pvc-8kvm5 found and phase=Bound (2.445412594s)
STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC
STEP: Checking that the PV status is Terminating
STEP: Deleting the PVC that is bound to the PV
STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC
Mar 25 12:32:12.689: INFO: Waiting up to 3m0s for PersistentVolume hostpath-f5nll to get deleted
Mar 25 12:32:13.352: INFO: PersistentVolume hostpath-f5nll found and phase=Bound (663.75931ms)
Mar 25 12:32:15.735: INFO: PersistentVolume hostpath-f5nll found and phase=Released (3.046399517s)
Mar 25 12:32:18.118: INFO: PersistentVolume hostpath-f5nll was removed
[AfterEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:32:18.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-protection-9654" for this suite.
[AfterEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92
Mar 25 12:32:18.530: INFO: AfterEach: Cleaning up test resources.
Mar 25 12:32:18.530: INFO: Deleting PersistentVolumeClaim "pvc-8kvm5"
Mar 25 12:32:18.590: INFO: Deleting PersistentVolume "hostpath-f5nll"
• [SLOW TEST:12.346 seconds]
[sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Verify that PV bound to a PVC is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":133,"completed":36,"skipped":2106,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:32:18.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] Pods sharing a single local PV [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634
[It] all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657
STEP: Create a PVC
STEP: Create 50 pods to use this PVC
STEP: Wait for all pods are running
[AfterEach] Pods sharing a single local PV [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648
STEP: Clean PV local-pvdnxfn
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:33:57.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2111" for this suite.
• [SLOW TEST:99.435 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Pods sharing a single local PV [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629
all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":133,"completed":37,"skipped":2150,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:33:58.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd"
Mar 25 12:34:14.134: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd && dd if=/dev/zero of=/tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd/file] Namespace:persistent-local-volumes-test-9623 PodName:hostexec-latest-worker2-qtjj7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:34:14.134: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:34:14.786: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9623 PodName:hostexec-latest-worker2-qtjj7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:34:14.787: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:34:15.095: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd && chmod o+rwx /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd] Namespace:persistent-local-volumes-test-9623 PodName:hostexec-latest-worker2-qtjj7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:34:15.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 12:34:15.674: INFO: Creating a PV followed by a PVC
Mar 25 12:34:15.696: INFO: Waiting for PV local-pvndftl to bind to PVC pvc-sh86m
Mar 25 12:34:15.696: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-sh86m] to have phase Bound
Mar 25 12:34:16.162: INFO: PersistentVolumeClaim pvc-sh86m found but phase is Pending instead of Bound.
Mar 25 12:34:18.170: INFO: PersistentVolumeClaim pvc-sh86m found and phase=Bound (2.474327159s)
Mar 25 12:34:18.170: INFO: Waiting up to 3m0s for PersistentVolume local-pvndftl to have phase Bound
Mar 25 12:34:18.292: INFO: PersistentVolume local-pvndftl found and phase=Bound (121.326146ms)
[It] should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
Mar 25 12:34:27.905: INFO: pod "pod-9a1568ec-73df-41c8-809b-cd7e20a1c2b4" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 25 12:34:27.905: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9623 PodName:pod-9a1568ec-73df-41c8-809b-cd7e20a1c2b4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:34:27.905: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:34:28.298: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
Mar 25 12:34:28.299: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9623 PodName:pod-9a1568ec-73df-41c8-809b-cd7e20a1c2b4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:34:28.299: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:34:28.623: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Mar 25 12:34:35.507: INFO: pod "pod-8524c406-6757-4036-8b88-bd58c8182596" created on Node "latest-worker2"
Mar 25 12:34:35.507: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9623 PodName:pod-8524c406-6757-4036-8b88-bd58c8182596 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:34:35.507: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:34:36.042: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Writing in pod2
Mar 25 12:34:36.042: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9623 PodName:pod-8524c406-6757-4036-8b88-bd58c8182596 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:34:36.042: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:34:36.429: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd > /mnt/volume1/test-file", out: "", stderr: "", err:
STEP: Reading in pod1
Mar 25 12:34:36.429: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9623 PodName:pod-9a1568ec-73df-41c8-809b-cd7e20a1c2b4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 12:34:36.429: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:34:36.863: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd", stderr: "", err:
STEP: Deleting pod1
STEP: Deleting pod pod-9a1568ec-73df-41c8-809b-cd7e20a1c2b4 in namespace persistent-local-volumes-test-9623
STEP: Deleting pod2
STEP: Deleting pod pod-8524c406-6757-4036-8b88-bd58c8182596 in namespace persistent-local-volumes-test-9623
[AfterEach] [Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 12:34:37.234: INFO: Deleting PersistentVolumeClaim "pvc-sh86m"
Mar 25 12:34:37.451: INFO: Deleting PersistentVolume "local-pvndftl"
Mar 25 12:34:38.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd] Namespace:persistent-local-volumes-test-9623 PodName:hostexec-latest-worker2-qtjj7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:34:38.010: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 12:34:38.836: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9623 PodName:hostexec-latest-worker2-qtjj7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:34:38.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd/file
Mar 25 12:34:39.317: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9623 PodName:hostexec-latest-worker2-qtjj7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:34:39.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd
Mar 25 12:34:39.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-92031a26-5804-46b4-8187-d1cdce4591bd] Namespace:persistent-local-volumes-test-9623 PodName:hostexec-latest-worker2-qtjj7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 12:34:39.949: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:34:40.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-9623" for this suite.
• [SLOW TEST:42.692 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":38,"skipped":2167,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
[BeforeEach] [sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:34:40.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pvc-protection
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72
Mar 25 12:34:41.234: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PVC
Mar 25 12:34:41.265: INFO: Default storage class: "standard"
Mar 25 12:34:41.265: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating a Pod that becomes Running and therefore is actively using the PVC
STEP: Waiting for PVC to become Bound
Mar 25 12:35:04.438: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectionfblv6] to have phase Bound
Mar 25 12:35:04.748: INFO: PersistentVolumeClaim pvc-protectionfblv6 found and phase=Bound (309.843213ms)
STEP: Checking that PVC Protection finalizer is set
[It] Verify that PVC in active use by a pod is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod
STEP: Checking that the PVC status is Terminating
STEP: Deleting the pod that uses the PVC
Mar 25 12:35:05.667: INFO: Deleting pod "pvc-tester-d9lvj" in namespace "pvc-protection-4945"
Mar 25 12:35:06.016: INFO: Wait up to 5m0s for pod "pvc-tester-d9lvj" to be fully deleted
STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod
Mar 25 12:35:48.507: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionfblv6 to be removed
Mar 25 12:35:48.847: INFO: Claim "pvc-protectionfblv6" in namespace "pvc-protection-4945" doesn't exist in the system
[AfterEach] [sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:35:48.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pvc-protection-4945" for this suite.
[AfterEach] [sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108
• [SLOW TEST:68.698 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Verify that PVC in active use by a pod is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":133,"completed":39,"skipped":2186,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:35:49.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] contain ephemeral=true when using inline volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
STEP: Building a driver namespace object, basename csi-mock-volumes-3737
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 12:35:53.231: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-attacher
Mar 25 12:35:53.608: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3737
Mar 25 12:35:53.609: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3737
Mar 25 12:35:54.033: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3737
Mar 25 12:35:54.241: INFO: creating *v1.Role: csi-mock-volumes-3737-6583/external-attacher-cfg-csi-mock-volumes-3737
Mar 25 12:35:54.279: INFO: creating *v1.RoleBinding: csi-mock-volumes-3737-6583/csi-attacher-role-cfg
Mar 25 12:35:54.748: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-provisioner
Mar 25 12:35:54.826: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3737
Mar 25 12:35:54.826: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3737
Mar 25 12:35:55.281: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3737
Mar 25 12:35:55.633: INFO: creating *v1.Role: csi-mock-volumes-3737-6583/external-provisioner-cfg-csi-mock-volumes-3737
Mar 25 12:35:55.911: INFO: creating *v1.RoleBinding: csi-mock-volumes-3737-6583/csi-provisioner-role-cfg
Mar 25 12:35:56.262: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-resizer
Mar 25 12:35:56.518: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3737
Mar 25 12:35:56.519: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3737
Mar 25 12:35:56.840: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3737
Mar 25 12:35:57.106: INFO: creating *v1.Role: csi-mock-volumes-3737-6583/external-resizer-cfg-csi-mock-volumes-3737
Mar 25 12:35:57.653: INFO: creating *v1.RoleBinding: csi-mock-volumes-3737-6583/csi-resizer-role-cfg
Mar 25 12:35:57.962: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-snapshotter
Mar 25 12:35:58.568: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3737
Mar 25 12:35:58.568: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3737
Mar 25 12:35:58.944: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3737
Mar 25 12:35:59.115: INFO: creating *v1.Role: csi-mock-volumes-3737-6583/external-snapshotter-leaderelection-csi-mock-volumes-3737
Mar 25 12:35:59.535: INFO: creating *v1.RoleBinding: csi-mock-volumes-3737-6583/external-snapshotter-leaderelection
Mar 25 12:36:00.057: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-mock
Mar 25 12:36:00.345: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3737
Mar 25 12:36:00.404: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3737
Mar 25 12:36:00.548: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3737
Mar 25 12:36:00.758: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3737
Mar 25 12:36:01.112: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3737
Mar 25 12:36:01.485: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3737
Mar 25 12:36:01.499: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3737
Mar 25 12:36:01.728: INFO: creating *v1.StatefulSet: csi-mock-volumes-3737-6583/csi-mockplugin
Mar 25 12:36:01.743: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3737
Mar 25 12:36:02.091: INFO: creating *v1.StatefulSet: csi-mock-volumes-3737-6583/csi-mockplugin-attacher
Mar 25 12:36:02.406: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3737"
Mar 25 12:36:02.818: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3737 to register on node latest-worker2
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Mar 25 12:36:29.859: INFO: Error getting logs for pod inline-volume-nxr88: the server rejected our request for an unknown reason (get pods inline-volume-nxr88)
Mar 25 12:36:29.922: INFO: Deleting pod "inline-volume-nxr88" in namespace "csi-mock-volumes-3737"
Mar 25 12:36:30.022: INFO: Wait up to 5m0s for pod "inline-volume-nxr88" to be fully deleted
STEP: Deleting the previously created pod
Mar 25 12:36:36.243: INFO: Deleting pod "pvc-volume-tester-vxlsl" in namespace "csi-mock-volumes-3737"
Mar 25 12:36:36.318: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vxlsl" to be fully deleted
STEP: Checking CSI driver logs
Mar 25 12:37:33.676: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-vxlsl
Mar 25 12:37:33.676: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-3737
Mar 25 12:37:33.676: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 05e364e6-ef77-46f7-864c-a774bdd3eb33
Mar 25 12:37:33.676: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Mar 25 12:37:33.676: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Mar 25 12:37:33.676: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-498566722d58e2a21e1ee41adf16655ee6900a260dc25d33fb21f5795abc51f8","target_path":"/var/lib/kubelet/pods/05e364e6-ef77-46f7-864c-a774bdd3eb33/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-vxlsl
Mar 25 12:37:33.676: INFO: Deleting pod "pvc-volume-tester-vxlsl" in namespace "csi-mock-volumes-3737"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-3737
STEP: Waiting for namespaces [csi-mock-volumes-3737] to vanish
STEP: uninstalling csi mock driver
Mar 25 12:37:55.078: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-attacher
Mar 25 12:37:55.327: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3737
Mar 25 12:37:55.530: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3737
Mar 25 12:37:55.893: INFO: deleting *v1.Role: csi-mock-volumes-3737-6583/external-attacher-cfg-csi-mock-volumes-3737
Mar 25 12:37:57.208: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3737-6583/csi-attacher-role-cfg
Mar 25 12:37:57.594: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-provisioner
Mar 25 12:37:57.808: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3737
Mar 25 12:37:57.992: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3737
Mar 25 12:37:58.816: INFO: deleting *v1.Role: csi-mock-volumes-3737-6583/external-provisioner-cfg-csi-mock-volumes-3737
Mar 25 12:37:59.428: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3737-6583/csi-provisioner-role-cfg
Mar 25 12:37:59.758: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-resizer
Mar 25 12:37:59.796: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3737
Mar 25 12:38:00.202: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3737
Mar 25 12:38:00.503: INFO: deleting *v1.Role: csi-mock-volumes-3737-6583/external-resizer-cfg-csi-mock-volumes-3737
Mar 25 12:38:00.962: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3737-6583/csi-resizer-role-cfg
Mar 25 12:38:01.297: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-snapshotter
Mar 25 12:38:02.303: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3737
Mar 25 12:38:02.313: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3737
Mar 25 12:38:03.069: INFO: deleting *v1.Role: csi-mock-volumes-3737-6583/external-snapshotter-leaderelection-csi-mock-volumes-3737
Mar 25 12:38:03.758: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3737-6583/external-snapshotter-leaderelection
Mar 25 12:38:04.429: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3737-6583/csi-mock
Mar 25 12:38:05.127: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3737
Mar 25 12:38:05.398: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3737
Mar 25 12:38:05.819: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3737
Mar 25 12:38:06.111: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3737
Mar 25 12:38:06.387: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3737
Mar 25 12:38:06.729: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3737
Mar 25 12:38:06.747: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3737
Mar 25 12:38:06.931: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3737-6583/csi-mockplugin
Mar 25 12:38:07.783: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3737
Mar 25 12:38:08.030: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3737-6583/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-3737-6583
STEP: Waiting for namespaces [csi-mock-volumes-3737-6583] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 12:38:37.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:168.498 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI workload information using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
contain ephemeral=true when using inline volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":133,"completed":40,"skipped":2204,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]}
SS
------------------------------
[sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 12:38:37.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be passed when podInfoOnMount=true
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
STEP: Building a driver namespace object, basename csi-mock-volumes-6952
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 12:38:39.898: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-attacher
Mar 25 12:38:39.923: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6952
Mar 25 12:38:39.923: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6952
Mar 25 12:38:39.969: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6952
Mar 25 12:38:40.057: INFO: creating *v1.Role: csi-mock-volumes-6952-5526/external-attacher-cfg-csi-mock-volumes-6952
Mar 25 12:38:40.073: INFO: creating *v1.RoleBinding: csi-mock-volumes-6952-5526/csi-attacher-role-cfg
Mar 25 12:38:40.095: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-provisioner
Mar 25 12:38:40.129: INFO: creating *v1.ClusterRole:
external-provisioner-runner-csi-mock-volumes-6952 Mar 25 12:38:40.129: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6952 Mar 25 12:38:40.154: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6952 Mar 25 12:38:40.243: INFO: creating *v1.Role: csi-mock-volumes-6952-5526/external-provisioner-cfg-csi-mock-volumes-6952 Mar 25 12:38:40.249: INFO: creating *v1.RoleBinding: csi-mock-volumes-6952-5526/csi-provisioner-role-cfg Mar 25 12:38:40.287: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-resizer Mar 25 12:38:40.330: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6952 Mar 25 12:38:40.330: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6952 Mar 25 12:38:40.374: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6952 Mar 25 12:38:40.406: INFO: creating *v1.Role: csi-mock-volumes-6952-5526/external-resizer-cfg-csi-mock-volumes-6952 Mar 25 12:38:40.430: INFO: creating *v1.RoleBinding: csi-mock-volumes-6952-5526/csi-resizer-role-cfg Mar 25 12:38:40.532: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-snapshotter Mar 25 12:38:40.567: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6952 Mar 25 12:38:40.567: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6952 Mar 25 12:38:40.593: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6952 Mar 25 12:38:40.609: INFO: creating *v1.Role: csi-mock-volumes-6952-5526/external-snapshotter-leaderelection-csi-mock-volumes-6952 Mar 25 12:38:40.692: INFO: creating *v1.RoleBinding: csi-mock-volumes-6952-5526/external-snapshotter-leaderelection Mar 25 12:38:40.717: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-mock Mar 25 12:38:40.761: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6952 Mar 25 12:38:40.869: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-provisioner-role-csi-mock-volumes-6952 Mar 25 12:38:40.929: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6952 Mar 25 12:38:40.997: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6952 Mar 25 12:38:41.024: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6952 Mar 25 12:38:41.037: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6952 Mar 25 12:38:41.062: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6952 Mar 25 12:38:41.079: INFO: creating *v1.StatefulSet: csi-mock-volumes-6952-5526/csi-mockplugin Mar 25 12:38:41.136: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6952 Mar 25 12:38:41.169: INFO: creating *v1.StatefulSet: csi-mock-volumes-6952-5526/csi-mockplugin-attacher Mar 25 12:38:41.205: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6952" Mar 25 12:38:41.327: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6952 to register on node latest-worker2 STEP: Creating pod Mar 25 12:38:58.525: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 12:38:58.577: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-nkglb] to have phase Bound Mar 25 12:38:58.679: INFO: PersistentVolumeClaim pvc-nkglb found but phase is Pending instead of Bound. 
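The "Waiting up to 5m0s for PersistentVolumeClaims ... to have phase Bound" lines above come from a simple poll-until-timeout loop. A minimal shell sketch of that pattern follows; `wait_for_bound` and `get_phase` are illustrative names (not the framework's), and `get_phase` is a stub that flips to Bound after a few calls so the loop runs without a cluster. Against a real cluster the phase check would be something like `kubectl get pvc <name> -o jsonpath='{.status.phase}'`.

```shell
#!/bin/sh
# Sketch of the e2e framework's "wait for PVC phase Bound" poll.
# get_phase is a stub standing in for the API call; it sets the
# global $phase directly (no subshell) and reports Bound on the
# third call, mimicking a claim that binds after two poll cycles.
calls=0
get_phase() {
    calls=$((calls + 1))
    if [ "$calls" -ge 3 ]; then phase=Bound; else phase=Pending; fi
}

wait_for_bound() {
    attempts=$1
    i=0
    while [ "$i" -lt "$attempts" ]; do
        get_phase
        if [ "$phase" = Bound ]; then
            echo "PVC found and phase=Bound"
            return 0
        fi
        echo "PVC found but phase is $phase instead of Bound."
        # sleep 2   # the real framework polls every couple of seconds
        i=$((i + 1))
    done
    echo "timed out waiting for PVC to become Bound" >&2
    return 1
}

wait_for_bound 10
```

The log's alternating "found but phase is Pending" / "found and phase=Bound (…s)" lines are exactly this loop's two outcomes, stamped with elapsed time.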
Mar 25 12:39:00.692: INFO: PersistentVolumeClaim pvc-nkglb found and phase=Bound (2.114506567s) STEP: checking for CSIInlineVolumes feature Mar 25 12:39:30.917: INFO: Error getting logs for pod inline-volume-65wtp: the server rejected our request for an unknown reason (get pods inline-volume-65wtp) Mar 25 12:39:31.601: INFO: Deleting pod "inline-volume-65wtp" in namespace "csi-mock-volumes-6952" Mar 25 12:39:32.001: INFO: Wait up to 5m0s for pod "inline-volume-65wtp" to be fully deleted STEP: Deleting the previously created pod Mar 25 12:39:37.154: INFO: Deleting pod "pvc-volume-tester-mwpcj" in namespace "csi-mock-volumes-6952" Mar 25 12:39:37.236: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mwpcj" to be fully deleted STEP: Checking CSI driver logs Mar 25 12:40:26.661: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 866eee9c-d975-4f17-872f-8ae328606fe5 Mar 25 12:40:26.661: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Mar 25 12:40:26.661: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false Mar 25 12:40:26.661: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-mwpcj Mar 25 12:40:26.661: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-6952 Mar 25 12:40:26.661: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/866eee9c-d975-4f17-872f-8ae328606fe5/volumes/kubernetes.io~csi/pvc-785d8131-68a2-4377-a153-7ab7f8aecc6a/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-mwpcj Mar 25 12:40:26.661: INFO: Deleting pod "pvc-volume-tester-mwpcj" in namespace "csi-mock-volumes-6952" STEP: Deleting claim pvc-nkglb Mar 25 12:40:26.897: INFO: Waiting up to 2m0s for PersistentVolume pvc-785d8131-68a2-4377-a153-7ab7f8aecc6a to get deleted Mar 25 
12:40:27.039: INFO: PersistentVolume pvc-785d8131-68a2-4377-a153-7ab7f8aecc6a found and phase=Bound (141.829995ms) Mar 25 12:40:29.563: INFO: PersistentVolume pvc-785d8131-68a2-4377-a153-7ab7f8aecc6a was removed STEP: Deleting storageclass csi-mock-volumes-6952-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6952 STEP: Waiting for namespaces [csi-mock-volumes-6952] to vanish STEP: uninstalling csi mock driver Mar 25 12:40:42.124: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-attacher Mar 25 12:40:42.336: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6952 Mar 25 12:40:42.507: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6952 Mar 25 12:40:43.011: INFO: deleting *v1.Role: csi-mock-volumes-6952-5526/external-attacher-cfg-csi-mock-volumes-6952 Mar 25 12:40:43.040: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6952-5526/csi-attacher-role-cfg Mar 25 12:40:43.239: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-provisioner Mar 25 12:40:43.262: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6952 Mar 25 12:40:43.365: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6952 Mar 25 12:40:43.534: INFO: deleting *v1.Role: csi-mock-volumes-6952-5526/external-provisioner-cfg-csi-mock-volumes-6952 Mar 25 12:40:43.582: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6952-5526/csi-provisioner-role-cfg Mar 25 12:40:43.606: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-resizer Mar 25 12:40:43.719: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6952 Mar 25 12:40:43.916: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6952 Mar 25 12:40:44.184: INFO: deleting *v1.Role: csi-mock-volumes-6952-5526/external-resizer-cfg-csi-mock-volumes-6952 Mar 25 12:40:44.543: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-6952-5526/csi-resizer-role-cfg Mar 25 12:40:44.680: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-snapshotter Mar 25 12:40:44.685: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6952 Mar 25 12:40:44.697: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6952 Mar 25 12:40:44.746: INFO: deleting *v1.Role: csi-mock-volumes-6952-5526/external-snapshotter-leaderelection-csi-mock-volumes-6952 Mar 25 12:40:44.777: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6952-5526/external-snapshotter-leaderelection Mar 25 12:40:44.868: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6952-5526/csi-mock Mar 25 12:40:44.890: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6952 Mar 25 12:40:44.923: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6952 Mar 25 12:40:44.962: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6952 Mar 25 12:40:45.002: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6952 Mar 25 12:40:45.042: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6952 Mar 25 12:40:45.054: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6952 Mar 25 12:40:45.060: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6952 Mar 25 12:40:45.073: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6952-5526/csi-mockplugin Mar 25 12:40:45.176: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6952 Mar 25 12:40:45.202: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6952-5526/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-6952-5526 STEP: Waiting for namespaces [csi-mock-volumes-6952-5526] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:41:29.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:171.459 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":133,"completed":41,"skipped":2206,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:41:29.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-5b37d0d3-1d7e-4d75-8a6b-f54feab440c8" Mar 25 12:41:33.615: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5b37d0d3-1d7e-4d75-8a6b-f54feab440c8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5b37d0d3-1d7e-4d75-8a6b-f54feab440c8" "/tmp/local-volume-test-5b37d0d3-1d7e-4d75-8a6b-f54feab440c8"] Namespace:persistent-local-volumes-test-4066 PodName:hostexec-latest-worker2-9kk5b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:41:33.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:41:33.763: INFO: Creating a PV followed by a PVC Mar 25 12:41:33.895: INFO: Waiting for PV local-pvk8vgg to bind to PVC pvc-lpbfp Mar 25 12:41:33.895: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-lpbfp] to have phase Bound Mar 25 12:41:34.459: INFO: PersistentVolumeClaim pvc-lpbfp found but phase is Pending instead of Bound. Mar 25 12:41:36.464: INFO: PersistentVolumeClaim pvc-lpbfp found but phase is Pending instead of Bound. Mar 25 12:41:38.541: INFO: PersistentVolumeClaim pvc-lpbfp found but phase is Pending instead of Bound. Mar 25 12:41:40.545: INFO: PersistentVolumeClaim pvc-lpbfp found but phase is Pending instead of Bound. Mar 25 12:41:42.723: INFO: PersistentVolumeClaim pvc-lpbfp found but phase is Pending instead of Bound. Mar 25 12:41:44.983: INFO: PersistentVolumeClaim pvc-lpbfp found but phase is Pending instead of Bound. 
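The tmpfs volume above is prepared with a root-only `mount -t tmpfs` via nsenter on the node; the write-from-pod1 / read-from-pod2 exchange this test then performs can be approximated without root by letting two separate shell invocations share a plain temp directory. This is a local sketch, not the e2e code: the `volume1` path and `test-file-content` payload mirror the log, the plain directory standing in for the tmpfs mount is an assumption.

```shell
#!/bin/sh
# Approximation of the [Volume type: tmpfs] two-pod scenario.
# A plain temp directory stands in for the tmpfs mount (mounting
# tmpfs needs root); "pod1" and "pod2" are separate sh -c
# invocations that share the same host path.
vol=$(mktemp -d /tmp/local-volume-test-XXXXXX)

# pod1 writes (mirrors "mkdir -p /mnt/volume1; echo ... > .../test-file")
sh -c "mkdir -p '$vol/volume1'; echo test-file-content > '$vol/volume1/test-file'"

# pod2 reads the file pod1 wrote
sh -c "cat '$vol/volume1/test-file'"

# cleanup (mirrors the test's umount + "rm -r" teardown)
rm -r "$vol"
```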
Mar 25 12:41:47.054: INFO: PersistentVolumeClaim pvc-lpbfp found but phase is Pending instead of Bound. Mar 25 12:41:49.059: INFO: PersistentVolumeClaim pvc-lpbfp found and phase=Bound (15.163902206s) Mar 25 12:41:49.059: INFO: Waiting up to 3m0s for PersistentVolume local-pvk8vgg to have phase Bound Mar 25 12:41:49.061: INFO: PersistentVolume local-pvk8vgg found and phase=Bound (2.240071ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 12:41:55.261: INFO: pod "pod-6d8661d9-6d8e-4b34-876f-a48b5e71c00a" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 12:41:55.261: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4066 PodName:pod-6d8661d9-6d8e-4b34-876f-a48b5e71c00a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:41:55.261: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:41:55.377: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 12:41:55.377: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4066 PodName:pod-6d8661d9-6d8e-4b34-876f-a48b5e71c00a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:41:55.377: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:41:55.571: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-6d8661d9-6d8e-4b34-876f-a48b5e71c00a in namespace persistent-local-volumes-test-4066 STEP: Creating pod2 STEP: Creating a pod Mar 25 12:42:01.732: INFO: pod 
"pod-b3b98fd7-d98d-43a4-952a-98a604cf4561" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 12:42:01.733: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4066 PodName:pod-b3b98fd7-d98d-43a4-952a-98a604cf4561 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:42:01.733: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:42:01.857: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-b3b98fd7-d98d-43a4-952a-98a604cf4561 in namespace persistent-local-volumes-test-4066 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:42:01.863: INFO: Deleting PersistentVolumeClaim "pvc-lpbfp" Mar 25 12:42:01.877: INFO: Deleting PersistentVolume "local-pvk8vgg" STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-5b37d0d3-1d7e-4d75-8a6b-f54feab440c8" Mar 25 12:42:01.983: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5b37d0d3-1d7e-4d75-8a6b-f54feab440c8"] Namespace:persistent-local-volumes-test-4066 PodName:hostexec-latest-worker2-9kk5b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:42:01.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 12:42:02.100: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5b37d0d3-1d7e-4d75-8a6b-f54feab440c8] Namespace:persistent-local-volumes-test-4066 PodName:hostexec-latest-worker2-9kk5b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 
12:42:02.100: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:42:02.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4066" for this suite. • [SLOW TEST:32.921 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":42,"skipped":2353,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:42:02.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 STEP: Create configmap STEP: Creating pod pod-subpath-test-configmap-c8hs STEP: Failing liveness probe Mar 25 12:42:14.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=subpath-9239 exec pod-subpath-test-configmap-c8hs --container test-container-volume-configmap-c8hs -- /bin/sh -c rm /probe-volume/probe-file' Mar 25 12:42:19.673: INFO: stderr: "" Mar 25 12:42:19.673: INFO: stdout: "" Mar 25 12:42:19.673: INFO: Pod exec output: STEP: Waiting for container to restart Mar 25 12:42:19.676: INFO: Container test-container-subpath-configmap-c8hs, restarts: 0 Mar 25 12:42:29.956: INFO: Container test-container-subpath-configmap-c8hs, restarts: 1 Mar 25 12:42:29.956: INFO: Container has restart count: 1 STEP: Fix liveness probe STEP: Waiting for container to stop restarting Mar 25 12:42:32.048: INFO: Container has restart count: 2 Mar 25 12:42:52.049: INFO: Container has restart count: 3 Mar 25 12:43:54.050: INFO: Container restart has stabilized Mar 25 12:43:54.050: INFO: Deleting pod "pod-subpath-test-configmap-c8hs" in namespace "subpath-9239" Mar 25 12:43:54.055: INFO: Wait up to 5m0s for pod "pod-subpath-test-configmap-c8hs" to be fully deleted [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:44:56.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9239" for this suite. 
• [SLOW TEST:173.839 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Container restart /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130 should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":133,"completed":43,"skipped":2370,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:44:56.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Mar 25 12:44:56.239: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Mar 25 12:44:56.280: INFO: Default storage class: "standard" Mar 25 12:44:56.280: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is 
actively using the PVC STEP: Waiting for PVC to become Bound Mar 25 12:45:06.348: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectionk25kj] to have phase Bound Mar 25 12:45:06.351: INFO: PersistentVolumeClaim pvc-protectionk25kj found and phase=Bound (3.177409ms) STEP: Checking that PVC Protection finalizer is set [It] Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 STEP: Deleting the pod using the PVC Mar 25 12:45:06.354: INFO: Deleting pod "pvc-tester-62qz2" in namespace "pvc-protection-7762" Mar 25 12:45:06.359: INFO: Wait up to 5m0s for pod "pvc-tester-62qz2" to be fully deleted STEP: Deleting the PVC Mar 25 12:45:56.446: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionk25kj to be removed Mar 25 12:45:58.459: INFO: Claim "pvc-protectionk25kj" in namespace "pvc-protection-7762" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:45:58.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-7762" for this suite. 
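The deletion sequence above hinges on the `kubernetes.io/pvc-protection` finalizer: deleting the PVC only completes after the pod using it is gone and the controller clears the finalizer. Below is a toy shell model of that ordering — a directory stands in for the PVC object and a marker file for the finalizer; `delete_pvc` is an invented name and none of this is the real controller logic.

```shell
#!/bin/sh
# Toy model of PVC protection. The PVC "object" is a directory, the
# kubernetes.io/pvc-protection finalizer is a marker file, and the
# object can only be removed once the marker is gone (i.e. no pod is
# using the claim any more).
pvc=$(mktemp -d /tmp/pvc-XXXXXX)
touch "$pvc/finalizer-pvc-protection"   # set while a pod uses the claim

delete_pvc() {
    if [ -e "$pvc/finalizer-pvc-protection" ]; then
        echo "deletion pending: finalizer present"
        return 1
    fi
    rm -r "$pvc"
    echo "PVC removed"
}

delete_pvc || true                     # blocked: pod still "using" the PVC
rm "$pvc/finalizer-pvc-protection"     # pod deleted -> finalizer cleared
delete_pvc                             # now the delete goes through
```

This is why the log shows the pod being fully deleted first and only then "Claim ... doesn't exist in the system".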
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:62.325 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":133,"completed":44,"skipped":2415,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:45:58.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: 
Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-c71a8d3f-2805-4583-9631-025fded6139d" Mar 25 12:46:03.209: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c71a8d3f-2805-4583-9631-025fded6139d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c71a8d3f-2805-4583-9631-025fded6139d" "/tmp/local-volume-test-c71a8d3f-2805-4583-9631-025fded6139d"] Namespace:persistent-local-volumes-test-5428 PodName:hostexec-latest-worker-xhlps ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:03.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:46:03.314: INFO: Creating a PV followed by a PVC Mar 25 12:46:03.329: INFO: Waiting for PV local-pvnxkfh to bind to PVC pvc-pjlct Mar 25 12:46:03.329: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-pjlct] to have phase Bound Mar 25 12:46:03.348: INFO: PersistentVolumeClaim pvc-pjlct found but phase is Pending instead of Bound. Mar 25 12:46:05.353: INFO: PersistentVolumeClaim pvc-pjlct found but phase is Pending instead of Bound. Mar 25 12:46:07.357: INFO: PersistentVolumeClaim pvc-pjlct found but phase is Pending instead of Bound. Mar 25 12:46:09.362: INFO: PersistentVolumeClaim pvc-pjlct found but phase is Pending instead of Bound. Mar 25 12:46:11.367: INFO: PersistentVolumeClaim pvc-pjlct found but phase is Pending instead of Bound. Mar 25 12:46:13.372: INFO: PersistentVolumeClaim pvc-pjlct found but phase is Pending instead of Bound. Mar 25 12:46:15.376: INFO: PersistentVolumeClaim pvc-pjlct found but phase is Pending instead of Bound. 
Mar 25 12:46:17.380: INFO: PersistentVolumeClaim pvc-pjlct found and phase=Bound (14.051471422s) Mar 25 12:46:17.381: INFO: Waiting up to 3m0s for PersistentVolume local-pvnxkfh to have phase Bound Mar 25 12:46:17.383: INFO: PersistentVolume local-pvnxkfh found and phase=Bound (2.551927ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 12:46:21.436: INFO: pod "pod-fe41ecf4-0ec2-42bb-8fb3-4a78ca92753a" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 12:46:21.437: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5428 PodName:pod-fe41ecf4-0ec2-42bb-8fb3-4a78ca92753a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:21.437: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:21.550: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 12:46:21.550: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5428 PodName:pod-fe41ecf4-0ec2-42bb-8fb3-4a78ca92753a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:21.550: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:21.645: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 12:46:21.645: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo 
/tmp/local-volume-test-c71a8d3f-2805-4583-9631-025fded6139d > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5428 PodName:pod-fe41ecf4-0ec2-42bb-8fb3-4a78ca92753a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:21.646: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:21.883: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c71a8d3f-2805-4583-9631-025fded6139d > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-fe41ecf4-0ec2-42bb-8fb3-4a78ca92753a in namespace persistent-local-volumes-test-5428 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:46:21.889: INFO: Deleting PersistentVolumeClaim "pvc-pjlct" Mar 25 12:46:22.086: INFO: Deleting PersistentVolume "local-pvnxkfh" STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-c71a8d3f-2805-4583-9631-025fded6139d" Mar 25 12:46:22.169: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c71a8d3f-2805-4583-9631-025fded6139d"] Namespace:persistent-local-volumes-test-5428 PodName:hostexec-latest-worker-xhlps ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:22.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 12:46:22.304: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c71a8d3f-2805-4583-9631-025fded6139d] Namespace:persistent-local-volumes-test-5428 
PodName:hostexec-latest-worker-xhlps ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:22.304: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:46:22.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5428" for this suite. • [SLOW TEST:24.211 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":45,"skipped":2487,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes GlusterFS should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 
STEP: Creating a kubernetes client Mar 25 12:46:22.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Mar 25 12:46:22.877: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:46:22.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-4598" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.276 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 GlusterFS [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:128 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:46:22.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 25 12:46:23.040: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:46:23.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3150" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.125 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:46:23.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-5bd712b3-7d02-4457-afd5-427684a0d15e" Mar 25 12:46:27.233: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5bd712b3-7d02-4457-afd5-427684a0d15e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5bd712b3-7d02-4457-afd5-427684a0d15e" "/tmp/local-volume-test-5bd712b3-7d02-4457-afd5-427684a0d15e"] Namespace:persistent-local-volumes-test-2669 PodName:hostexec-latest-worker-fd79h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:27.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:46:27.371: INFO: Creating a PV followed by a PVC Mar 25 12:46:27.383: INFO: Waiting for PV local-pvz6sj2 to bind to PVC pvc-f9lsh Mar 25 12:46:27.383: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-f9lsh] to have phase Bound Mar 25 12:46:27.402: INFO: PersistentVolumeClaim pvc-f9lsh found but phase is Pending instead of Bound. Mar 25 12:46:29.407: INFO: PersistentVolumeClaim pvc-f9lsh found but phase is Pending instead of Bound. Mar 25 12:46:31.412: INFO: PersistentVolumeClaim pvc-f9lsh found but phase is Pending instead of Bound. 
Mar 25 12:46:33.417: INFO: PersistentVolumeClaim pvc-f9lsh found and phase=Bound (6.033576746s) Mar 25 12:46:33.417: INFO: Waiting up to 3m0s for PersistentVolume local-pvz6sj2 to have phase Bound Mar 25 12:46:33.420: INFO: PersistentVolume local-pvz6sj2 found and phase=Bound (2.980834ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 12:46:37.701: INFO: pod "pod-114a348b-4c1c-43f7-89ad-10485ba1dec9" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 12:46:37.701: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2669 PodName:pod-114a348b-4c1c-43f7-89ad-10485ba1dec9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:37.701: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:37.956: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 12:46:37.957: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2669 PodName:pod-114a348b-4c1c-43f7-89ad-10485ba1dec9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:37.957: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:38.129: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-114a348b-4c1c-43f7-89ad-10485ba1dec9 in namespace persistent-local-volumes-test-2669 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:46:38.134: INFO: Deleting PersistentVolumeClaim "pvc-f9lsh" Mar 25 12:46:38.205: INFO: Deleting PersistentVolume "local-pvz6sj2" STEP: Unmount tmpfs mount point on node "latest-worker" at path "/tmp/local-volume-test-5bd712b3-7d02-4457-afd5-427684a0d15e" Mar 25 12:46:38.218: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5bd712b3-7d02-4457-afd5-427684a0d15e"] Namespace:persistent-local-volumes-test-2669 PodName:hostexec-latest-worker-fd79h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:38.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 12:46:38.360: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5bd712b3-7d02-4457-afd5-427684a0d15e] Namespace:persistent-local-volumes-test-2669 PodName:hostexec-latest-worker-fd79h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:38.360: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:46:38.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2669" for this suite. 
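The write/read round-trip the framework execs inside `write-pod` (via `/bin/sh -c`) can be reproduced standalone. This sketch substitutes a throwaway temp directory for the PV mount point `/mnt/volume1`, since no real volume is attached here:

```shell
# Sketch of the test's in-pod commands, run against a temp dir
# instead of the PV mount point /mnt/volume1 (assumption: no real
# local volume is mounted in this sketch).
vol="$(mktemp -d)"

# Write phase (mirrors: mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file)
mkdir -p "$vol"
echo test-file-content > "$vol/test-file"

# Read phase (mirrors: cat /mnt/volume1/test-file)
cat "$vol/test-file"   # prints: test-file-content

rm -rf "$vol"
```

The test passes when the `cat` output matches the string written in the write phase, which is exactly the `out: "test-file-content"` seen in the `podRWCmdExec` log lines above.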
• [SLOW TEST:16.088 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":46,"skipped":2629,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:46:39.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: 
blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046" Mar 25 12:46:43.378: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046 && dd if=/dev/zero of=/tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046/file] Namespace:persistent-local-volumes-test-6465 PodName:hostexec-latest-worker2-gzwfw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:43.378: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:43.544: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6465 PodName:hostexec-latest-worker2-gzwfw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:43.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:46:43.652: INFO: Creating a PV followed by a PVC Mar 25 12:46:43.747: INFO: Waiting for PV local-pvr775m to bind to PVC pvc-vs7f6 Mar 25 12:46:43.747: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-vs7f6] to have phase Bound Mar 25 12:46:43.820: INFO: PersistentVolumeClaim pvc-vs7f6 found but phase is Pending instead of Bound. Mar 25 12:46:45.910: INFO: PersistentVolumeClaim pvc-vs7f6 found but phase is Pending instead of Bound. 
Mar 25 12:46:47.914: INFO: PersistentVolumeClaim pvc-vs7f6 found and phase=Bound (4.166598637s) Mar 25 12:46:47.914: INFO: Waiting up to 3m0s for PersistentVolume local-pvr775m to have phase Bound Mar 25 12:46:47.916: INFO: PersistentVolume local-pvr775m found and phase=Bound (2.207268ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 12:46:53.939: INFO: pod "pod-5dd97984-0598-45ee-a4bf-f10a5e1db3e8" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 12:46:53.939: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6465 PodName:pod-5dd97984-0598-45ee-a4bf-f10a5e1db3e8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:53.939: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:54.045: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 12:46:54.045: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6465 PodName:pod-5dd97984-0598-45ee-a4bf-f10a5e1db3e8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:54.045: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:54.138: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 12:46:58.319: INFO: pod "pod-197eb444-10e1-407b-96a1-308260ddfe87" created on Node "latest-worker2" Mar 25 12:46:58.319: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6465 
PodName:pod-197eb444-10e1-407b-96a1-308260ddfe87 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:58.319: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:58.443: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 12:46:58.443: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6465 PodName:pod-197eb444-10e1-407b-96a1-308260ddfe87 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:58.443: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:58.530: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 12:46:58.530: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6465 PodName:pod-5dd97984-0598-45ee-a4bf-f10a5e1db3e8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:46:58.530: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:46:58.619: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/dev/loop0", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-5dd97984-0598-45ee-a4bf-f10a5e1db3e8 in namespace persistent-local-volumes-test-6465 STEP: Deleting pod2 STEP: Deleting pod pod-197eb444-10e1-407b-96a1-308260ddfe87 in namespace persistent-local-volumes-test-6465 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:46:58.983: INFO: Deleting PersistentVolumeClaim "pvc-vs7f6" Mar 25 12:46:59.387: INFO: Deleting PersistentVolume "local-pvr775m" Mar 25 12:46:59.643: 
INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6465 PodName:hostexec-latest-worker2-gzwfw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:59.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046/file Mar 25 12:46:59.906: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6465 PodName:hostexec-latest-worker2-gzwfw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:46:59.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046 Mar 25 12:47:00.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046] Namespace:persistent-local-volumes-test-6465 PodName:hostexec-latest-worker2-gzwfw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:47:00.010: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:47:00.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6465" for this suite. 
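The teardown above locates the loop device backing the `blockfswithoutformat` volume by grepping `losetup` output for the backing file and taking the first column, then detaches it with `losetup -d`. A minimal sketch of that lookup pipeline, run against a canned listing (real `losetup` and `dd` require root, so the listing is faked here and the column layout is an assumption modeled on util-linux output):

```shell
# The test finds its loop device with:
#   E2E_LOOP_DEV=$(losetup | grep <backing-file> | awk '{ print $1 }')
# fake_losetup stands in for the real losetup listing (assumption:
# the field layout below mimics util-linux `losetup`).
backing=/tmp/local-volume-test-6467544c-a0ec-4a66-a849-4af8addce046/file
fake_losetup() {
  printf '%s\n' \
    "/dev/loop0 0 0 0 0 $backing 0 512" \
    "/dev/loop1 0 0 0 0 /var/lib/other.img 0 512"
}
dev=$(fake_losetup | grep "$backing" | awk '{ print $1 }')
echo "$dev"   # the device the teardown would pass to: losetup -d "$dev"
```

Only the line mentioning the test's own backing file survives the `grep`, so `awk '{ print $1 }'` yields exactly one device name even when other loop devices exist on the node.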
• [SLOW TEST:21.108 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":47,"skipped":2649,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:47:00.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace 
[NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 STEP: Creating projection with secret that has name projected-secret-test-a755354d-dfc2-4f55-882a-b195d5365ed6 STEP: Creating a pod to test consume secrets Mar 25 12:47:02.073: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0098ce9f-5b37-441e-a5ad-78338351ce3e" in namespace "projected-23" to be "Succeeded or Failed" Mar 25 12:47:02.167: INFO: Pod "pod-projected-secrets-0098ce9f-5b37-441e-a5ad-78338351ce3e": Phase="Pending", Reason="", readiness=false. Elapsed: 94.602682ms Mar 25 12:47:04.265: INFO: Pod "pod-projected-secrets-0098ce9f-5b37-441e-a5ad-78338351ce3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192181958s Mar 25 12:47:07.115: INFO: Pod "pod-projected-secrets-0098ce9f-5b37-441e-a5ad-78338351ce3e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.042081463s Mar 25 12:47:09.300: INFO: Pod "pod-projected-secrets-0098ce9f-5b37-441e-a5ad-78338351ce3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.226890235s STEP: Saw pod success Mar 25 12:47:09.300: INFO: Pod "pod-projected-secrets-0098ce9f-5b37-441e-a5ad-78338351ce3e" satisfied condition "Succeeded or Failed" Mar 25 12:47:09.304: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-0098ce9f-5b37-441e-a5ad-78338351ce3e container projected-secret-volume-test: STEP: delete the pod Mar 25 12:47:10.329: INFO: Waiting for pod pod-projected-secrets-0098ce9f-5b37-441e-a5ad-78338351ce3e to disappear Mar 25 12:47:10.408: INFO: Pod pod-projected-secrets-0098ce9f-5b37-441e-a5ad-78338351ce3e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:47:10.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-23" for this suite. 
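The repeated `Phase="Pending" … Elapsed:` lines above come from the framework polling the pod phase every couple of seconds until it reaches "Succeeded or Failed" or the 5m timeout expires. The pattern can be sketched with a stubbed phase query (`get_phase` below is a hypothetical stand-in for the real API call, and the iteration cap replaces the wall-clock timeout):

```shell
# Minimal sketch of the poll-until-terminal-phase loop.
# get_phase is a stub (assumption): it pretends the pod
# succeeds on the third poll, the way a real pod eventually
# moves from Pending to Succeeded.
get_phase() {
  if [ "$1" -ge 3 ]; then
    echo Succeeded
  else
    echo Pending
  fi
}

phase=Pending
i=0
while [ "$phase" = "Pending" ] && [ "$i" -lt 10 ]; do
  i=$((i + 1))
  phase=$(get_phase "$i")
done
echo "$phase after $i polls"
```

The same loop shape underlies the PVC-bind waits earlier in the log ("found but phase is Pending instead of Bound" repeated until "found and phase=Bound").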
STEP: Destroying namespace "secret-namespace-9624" for this suite. • [SLOW TEST:10.231 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":133,"completed":48,"skipped":2726,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:47:10.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Mar 25 
12:47:18.394: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7558 PodName:hostexec-latest-worker-8vhpx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:47:18.394: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:47:18.502: INFO: exec latest-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Mar 25 12:47:18.502: INFO: exec latest-worker: stdout: "0\n" Mar 25 12:47:18.502: INFO: exec latest-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Mar 25 12:47:18.502: INFO: exec latest-worker: exit code: 0 Mar 25 12:47:18.502: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:47:18.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7558" for this suite. 
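The skip above hinges on a shell subtlety worth noting: the probe `ls -1 <dir>/ | wc -l` logs stderr "No such file or directory" yet "exit code: 0". That is because a pipeline's exit status is the last command's, and `wc -l` happily counts zero input lines, so a missing directory still yields stdout `"0\n"` and success; the test then skips because the count is below 1. A minimal reproduction (the path below is a stand-in, with stderr suppressed rather than captured separately as the framework does):

```shell
# Why the localSSD probe reports exit code 0 even though ls fails:
# the pipeline's status is wc's, and wc counts zero lines from
# ls's empty stdout. (/no/such/dir is a stand-in path.)
count=$(ls -1 /no/such/dir/ 2>/dev/null | wc -l)
status=$?
echo "count=$count status=$status"
```

So the "Requires at least 1 scsi fs localSSD" skip is driven by the parsed count, not by the probe command's exit code.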
S [SKIPPING] in Spec Setup (BeforeEach) [8.000 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:47:18.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 12:47:22.643: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1a13f981-cb9b-4c5f-af1e-07dd314f584e] Namespace:persistent-local-volumes-test-5353 PodName:hostexec-latest-worker-ggjct ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:47:22.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:47:23.167: INFO: Creating a PV followed by a PVC Mar 25 12:47:23.982: INFO: Waiting for PV local-pvlkzm6 to bind to PVC pvc-f7jc8 Mar 25 12:47:23.982: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-f7jc8] to have phase Bound Mar 25 12:47:24.149: INFO: PersistentVolumeClaim pvc-f7jc8 found but phase is Pending instead of Bound. Mar 25 12:47:26.534: INFO: PersistentVolumeClaim pvc-f7jc8 found but phase is Pending instead of Bound. Mar 25 12:47:28.554: INFO: PersistentVolumeClaim pvc-f7jc8 found but phase is Pending instead of Bound. Mar 25 12:47:30.559: INFO: PersistentVolumeClaim pvc-f7jc8 found but phase is Pending instead of Bound. Mar 25 12:47:32.574: INFO: PersistentVolumeClaim pvc-f7jc8 found but phase is Pending instead of Bound. 
Mar 25 12:47:34.579: INFO: PersistentVolumeClaim pvc-f7jc8 found and phase=Bound (10.597180021s) Mar 25 12:47:34.579: INFO: Waiting up to 3m0s for PersistentVolume local-pvlkzm6 to have phase Bound Mar 25 12:47:34.582: INFO: PersistentVolume local-pvlkzm6 found and phase=Bound (2.772777ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 12:47:40.859: INFO: pod "pod-98acd4bc-3b80-4534-ae90-1888e2c9d495" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 12:47:40.859: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5353 PodName:pod-98acd4bc-3b80-4534-ae90-1888e2c9d495 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:47:40.859: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:47:40.968: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 12:47:40.968: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5353 PodName:pod-98acd4bc-3b80-4534-ae90-1888e2c9d495 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:47:40.968: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:47:41.053: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-98acd4bc-3b80-4534-ae90-1888e2c9d495 in namespace persistent-local-volumes-test-5353 STEP: Creating pod2 STEP: Creating a pod Mar 25 12:47:47.533: INFO: pod "pod-1c5e0e47-6fe3-45be-bff9-7837e8df4b53" created on Node "latest-worker" STEP: Reading in pod2 Mar 25 12:47:47.533: INFO: 
ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5353 PodName:pod-1c5e0e47-6fe3-45be-bff9-7837e8df4b53 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:47:47.533: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:47:47.630: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-1c5e0e47-6fe3-45be-bff9-7837e8df4b53 in namespace persistent-local-volumes-test-5353 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:47:47.635: INFO: Deleting PersistentVolumeClaim "pvc-f7jc8" Mar 25 12:47:47.656: INFO: Deleting PersistentVolume "local-pvlkzm6" STEP: Removing the test directory Mar 25 12:47:47.799: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1a13f981-cb9b-4c5f-af1e-07dd314f584e] Namespace:persistent-local-volumes-test-5353 PodName:hostexec-latest-worker-ggjct ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:47:47.799: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:47:47.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5353" for this suite. 
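The passing `[Volume type: dir]` case above boils down to a write-then-read round trip: pod1 runs `mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file`, pod1 is deleted, and pod2 (mounting the same local PV) runs `cat /mnt/volume1/test-file` and must see the same content. The same round trip can be sketched locally, with a temporary directory standing in for the PV mount (an assumption; in the test it is a `hostPath`-backed local volume on the node):

```shell
# Stand-in for the /mnt/volume1 mount shared by pod1 and pod2.
vol=$(mktemp -d)

# pod1's write, exactly as the test runs it via /bin/sh -c:
mkdir -p "$vol"
echo test-file-content > "$vol/test-file"

# pod2's read; because both pods mount the same node-local directory,
# the data survives pod1's deletion.
out=$(cat "$vol/test-file")

rm -r "$vol"
```

The persistence across pod deletion is the point of the test: the data lives in the node's filesystem, not in either pod's ephemeral storage.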
• [SLOW TEST:29.483 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":49,"skipped":2810,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:47:47.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 
[BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04" Mar 25 12:47:52.970: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04 && dd if=/dev/zero of=/tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04/file] Namespace:persistent-local-volumes-test-4635 PodName:hostexec-latest-worker2-qg22g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:47:52.970: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:47:53.460: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4635 PodName:hostexec-latest-worker2-qg22g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:47:53.460: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:47:53.831: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04 && chmod o+rwx /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04] Namespace:persistent-local-volumes-test-4635 PodName:hostexec-latest-worker2-qg22g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:47:53.831: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:47:54.352: INFO: Creating a PV followed by a PVC Mar 25 12:47:54.805: INFO: Waiting for PV local-pvrcnrr to bind to PVC pvc-wk7ph Mar 25 12:47:54.805: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-wk7ph] to have phase Bound Mar 25 12:47:55.111: INFO: PersistentVolumeClaim pvc-wk7ph found but phase is Pending instead of Bound. Mar 25 12:47:57.386: INFO: PersistentVolumeClaim pvc-wk7ph found but phase is Pending instead of Bound. Mar 25 12:47:59.609: INFO: PersistentVolumeClaim pvc-wk7ph found but phase is Pending instead of Bound. Mar 25 12:48:01.632: INFO: PersistentVolumeClaim pvc-wk7ph found but phase is Pending instead of Bound. Mar 25 12:48:03.637: INFO: PersistentVolumeClaim pvc-wk7ph found and phase=Bound (8.83122608s) Mar 25 12:48:03.637: INFO: Waiting up to 3m0s for PersistentVolume local-pvrcnrr to have phase Bound Mar 25 12:48:03.640: INFO: PersistentVolume local-pvrcnrr found and phase=Bound (2.766118ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 12:48:03.645: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:48:03.646: INFO: Deleting PersistentVolumeClaim "pvc-wk7ph" Mar 25 12:48:03.651: INFO: Deleting PersistentVolume "local-pvrcnrr" Mar 25 12:48:03.694: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04] Namespace:persistent-local-volumes-test-4635 
PodName:hostexec-latest-worker2-qg22g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:48:03.695: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:48:03.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4635 PodName:hostexec-latest-worker2-qg22g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:48:03.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04/file Mar 25 12:48:04.005: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4635 PodName:hostexec-latest-worker2-qg22g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:48:04.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04 Mar 25 12:48:04.121: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-485ff393-bb19-4317-aa47-5be55c8b8d04] Namespace:persistent-local-volumes-test-4635 PodName:hostexec-latest-worker2-qg22g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:48:04.121: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:48:04.238: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4635" for this suite. S [SKIPPING] [16.364 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:48:04.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-0fe70474-1499-4ba0-9b11-ab70ac8afbee" Mar 25 12:48:09.092: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0fe70474-1499-4ba0-9b11-ab70ac8afbee" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0fe70474-1499-4ba0-9b11-ab70ac8afbee" "/tmp/local-volume-test-0fe70474-1499-4ba0-9b11-ab70ac8afbee"] Namespace:persistent-local-volumes-test-73 PodName:hostexec-latest-worker2-w6j4z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:48:09.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:48:09.209: INFO: Creating a PV followed by a PVC Mar 25 12:48:09.261: INFO: Waiting for PV local-pv5rzzv to bind to PVC pvc-szmz4 Mar 25 12:48:09.261: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-szmz4] to have phase Bound Mar 25 12:48:09.330: INFO: PersistentVolumeClaim pvc-szmz4 found but phase is Pending instead of Bound. Mar 25 12:48:11.539: INFO: PersistentVolumeClaim pvc-szmz4 found but phase is Pending instead of Bound. Mar 25 12:48:13.543: INFO: PersistentVolumeClaim pvc-szmz4 found but phase is Pending instead of Bound. Mar 25 12:48:15.548: INFO: PersistentVolumeClaim pvc-szmz4 found but phase is Pending instead of Bound. 
Mar 25 12:48:17.553: INFO: PersistentVolumeClaim pvc-szmz4 found and phase=Bound (8.291808824s) Mar 25 12:48:17.553: INFO: Waiting up to 3m0s for PersistentVolume local-pv5rzzv to have phase Bound Mar 25 12:48:17.556: INFO: PersistentVolume local-pv5rzzv found and phase=Bound (3.02753ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 12:48:17.671: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:48:17.671: INFO: Deleting PersistentVolumeClaim "pvc-szmz4" Mar 25 12:48:17.676: INFO: Deleting PersistentVolume "local-pv5rzzv" STEP: Unmount tmpfs mount point on node "latest-worker2" at path "/tmp/local-volume-test-0fe70474-1499-4ba0-9b11-ab70ac8afbee" Mar 25 12:48:17.734: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0fe70474-1499-4ba0-9b11-ab70ac8afbee"] Namespace:persistent-local-volumes-test-73 PodName:hostexec-latest-worker2-w6j4z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:48:17.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Mar 25 12:48:18.017: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0fe70474-1499-4ba0-9b11-ab70ac8afbee] Namespace:persistent-local-volumes-test-73 PodName:hostexec-latest-worker2-w6j4z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} Mar 25 12:48:18.017: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:48:18.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-73" for this suite. S [SKIPPING] [13.824 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:48:18.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Mar 25 12:48:48.697: INFO: Deleting pod "pv-4058"/"pod-ephm-test-projected-v7jc" Mar 25 12:48:48.698: INFO: Deleting pod "pod-ephm-test-projected-v7jc" in namespace "pv-4058" Mar 25 12:48:48.704: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-v7jc" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:48:56.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4058" for this suite. • [SLOW TEST:38.552 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":133,"completed":50,"skipped":3009,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set different fsGroup for 
second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:48:56.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-fbfaa4cf-0473-452e-b180-f2e130621598" Mar 25 12:48:58.941: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fbfaa4cf-0473-452e-b180-f2e130621598 && dd if=/dev/zero of=/tmp/local-volume-test-fbfaa4cf-0473-452e-b180-f2e130621598/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-fbfaa4cf-0473-452e-b180-f2e130621598/file] Namespace:persistent-local-volumes-test-1217 PodName:hostexec-latest-worker-ncr5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:48:58.941: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:48:59.092: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-fbfaa4cf-0473-452e-b180-f2e130621598/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] 
Namespace:persistent-local-volumes-test-1217 PodName:hostexec-latest-worker-ncr5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:48:59.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:48:59.190: INFO: Creating a PV followed by a PVC Mar 25 12:48:59.202: INFO: Waiting for PV local-pv5jc9f to bind to PVC pvc-cjbqd Mar 25 12:48:59.202: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-cjbqd] to have phase Bound Mar 25 12:48:59.244: INFO: PersistentVolumeClaim pvc-cjbqd found but phase is Pending instead of Bound. Mar 25 12:49:01.253: INFO: PersistentVolumeClaim pvc-cjbqd found but phase is Pending instead of Bound. Mar 25 12:49:03.259: INFO: PersistentVolumeClaim pvc-cjbqd found and phase=Bound (4.056705803s) Mar 25 12:49:03.259: INFO: Waiting up to 3m0s for PersistentVolume local-pv5jc9f to have phase Bound Mar 25 12:49:03.262: INFO: PersistentVolume local-pv5jc9f found and phase=Bound (3.245488ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 12:49:03.271: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:49:03.273: INFO: Deleting PersistentVolumeClaim "pvc-cjbqd" Mar 25 12:49:03.293: INFO: Deleting PersistentVolume "local-pv5jc9f" Mar 25 12:49:03.305: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep 
/tmp/local-volume-test-fbfaa4cf-0473-452e-b180-f2e130621598/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1217 PodName:hostexec-latest-worker-ncr5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:49:03.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-fbfaa4cf-0473-452e-b180-f2e130621598/file Mar 25 12:49:03.436: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1217 PodName:hostexec-latest-worker-ncr5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:49:03.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-fbfaa4cf-0473-452e-b180-f2e130621598 Mar 25 12:49:03.564: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fbfaa4cf-0473-452e-b180-f2e130621598] Namespace:persistent-local-volumes-test-1217 PodName:hostexec-latest-worker-ncr5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:49:03.564: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:49:03.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1217" for this suite. 
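The `blockfswithoutformat` setup and teardown above build the volume from a loop device: `dd` creates a backing file, `losetup -f` attaches it, and at cleanup the suite recovers the device name with `losetup | grep <backing-file> | awk '{ print $1 }'` before running `losetup -d`. The attach/detach steps need root, but the device-name extraction is plain text processing and can be exercised against a sample line in the format `losetup` prints (values below are hypothetical):

```shell
# One line of `losetup` list output (hypothetical device and backing file).
sample='/dev/loop0 0 0 0 0 /tmp/local-volume-test-demo/file 0 512'

# Same grep|awk extraction the teardown uses to map a backing file
# back to its loop device; the first column is the device name.
loop_dev=$(printf '%s\n' "$sample" \
  | grep '/tmp/local-volume-test-demo/file' \
  | awk '{ print $1 }')
```

On a real node, `losetup -j <backing-file>` would report the association directly, but the grep/awk form works uniformly across util-linux versions.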
S [SKIPPING] [6.962 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:49:03.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-4442 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Mar 25 12:49:03.948: INFO: creating 
*v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-attacher Mar 25 12:49:03.952: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4442 Mar 25 12:49:03.952: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4442 Mar 25 12:49:03.962: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4442 Mar 25 12:49:03.968: INFO: creating *v1.Role: csi-mock-volumes-4442-8406/external-attacher-cfg-csi-mock-volumes-4442 Mar 25 12:49:03.990: INFO: creating *v1.RoleBinding: csi-mock-volumes-4442-8406/csi-attacher-role-cfg Mar 25 12:49:04.004: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-provisioner Mar 25 12:49:04.020: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4442 Mar 25 12:49:04.020: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4442 Mar 25 12:49:04.054: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4442 Mar 25 12:49:04.058: INFO: creating *v1.Role: csi-mock-volumes-4442-8406/external-provisioner-cfg-csi-mock-volumes-4442 Mar 25 12:49:04.064: INFO: creating *v1.RoleBinding: csi-mock-volumes-4442-8406/csi-provisioner-role-cfg Mar 25 12:49:04.091: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-resizer Mar 25 12:49:04.121: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4442 Mar 25 12:49:04.121: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4442 Mar 25 12:49:04.135: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4442 Mar 25 12:49:04.141: INFO: creating *v1.Role: csi-mock-volumes-4442-8406/external-resizer-cfg-csi-mock-volumes-4442 Mar 25 12:49:04.147: INFO: creating *v1.RoleBinding: csi-mock-volumes-4442-8406/csi-resizer-role-cfg Mar 25 12:49:04.191: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-snapshotter Mar 25 12:49:04.200: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-4442 Mar 25 12:49:04.200: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4442 Mar 25 12:49:04.207: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4442 Mar 25 12:49:04.243: INFO: creating *v1.Role: csi-mock-volumes-4442-8406/external-snapshotter-leaderelection-csi-mock-volumes-4442 Mar 25 12:49:04.261: INFO: creating *v1.RoleBinding: csi-mock-volumes-4442-8406/external-snapshotter-leaderelection Mar 25 12:49:04.330: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-mock Mar 25 12:49:04.334: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4442 Mar 25 12:49:04.345: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4442 Mar 25 12:49:04.350: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4442 Mar 25 12:49:04.357: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4442 Mar 25 12:49:04.385: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4442 Mar 25 12:49:04.427: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4442 Mar 25 12:49:04.467: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4442 Mar 25 12:49:04.471: INFO: creating *v1.StatefulSet: csi-mock-volumes-4442-8406/csi-mockplugin Mar 25 12:49:04.482: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4442 Mar 25 12:49:04.542: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4442" Mar 25 12:49:04.589: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4442 to register on node latest-worker2 I0325 12:49:14.398460 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0325 12:49:14.400636 7 csi.go:380] gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4442","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 12:49:14.443101 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0325 12:49:14.487200 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0325 12:49:14.505758 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4442","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 12:49:14.765109 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4442"},"Error":"","FullError":null} STEP: Creating pod Mar 25 12:49:20.906: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0325 12:49:21.017357 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} 
I0325 12:49:21.023312 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a"}}},"Error":"","FullError":null} I0325 12:49:22.249305 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 12:49:22.252: INFO: >>> kubeConfig: /root/.kube/config I0325 12:49:22.386126 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a","storage.kubernetes.io/csiProvisionerIdentity":"1616676554529-8081-csi-mock-csi-mock-volumes-4442"}},"Response":{},"Error":"","FullError":null} I0325 12:49:22.394979 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 12:49:22.398: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:49:22.513: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:49:22.608: INFO: >>> kubeConfig: /root/.kube/config I0325 12:49:22.699780 7 csi.go:380] gRPCCall: 
{"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a/globalmount","target_path":"/var/lib/kubelet/pods/a15dcc20-d040-4ae6-885e-a22a445644e3/volumes/kubernetes.io~csi/pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a","storage.kubernetes.io/csiProvisionerIdentity":"1616676554529-8081-csi-mock-csi-mock-volumes-4442"}},"Response":{},"Error":"","FullError":null} Mar 25 12:49:26.990: INFO: Deleting pod "pvc-volume-tester-6rfkf" in namespace "csi-mock-volumes-4442" Mar 25 12:49:26.993: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6rfkf" to be fully deleted Mar 25 12:49:30.687: INFO: >>> kubeConfig: /root/.kube/config I0325 12:49:30.781186 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/a15dcc20-d040-4ae6-885e-a22a445644e3/volumes/kubernetes.io~csi/pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a/mount"},"Response":{},"Error":"","FullError":null} I0325 12:49:30.790046 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0325 12:49:30.792199 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a/globalmount"},"Response":{},"Error":"","FullError":null} I0325 12:50:37.045102 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Mar 25 12:50:38.018: INFO: PVC event ADDED: 
&v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nl6zf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4442", SelfLink:"", UID:"a0198abe-dc98-49f5-a8ce-c771afdebe7a", ResourceVersion:"1160475", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273360, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003b23818), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003b23830)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002e5a0c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002e5a0d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:50:38.018: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nl6zf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4442", SelfLink:"", UID:"a0198abe-dc98-49f5-a8ce-c771afdebe7a", ResourceVersion:"1160478", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273360, loc:(*time.Location)(0x99208a0)}}, 
DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003b238c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003b238d8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003b238f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003b23908)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002e5a100), VolumeMode:(*v1.PersistentVolumeMode)(0xc002e5a110), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:50:38.018: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nl6zf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4442", SelfLink:"", UID:"a0198abe-dc98-49f5-a8ce-c771afdebe7a", ResourceVersion:"1160479", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273360, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4442", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005481bf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005481c08)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005481c20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005481c38)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005481c50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005481c68)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc001fb00e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001fb00f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:50:38.019: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nl6zf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4442", SelfLink:"", UID:"a0198abe-dc98-49f5-a8ce-c771afdebe7a", ResourceVersion:"1160487", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273360, 
loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4442", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028fc4b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028fc4c8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028fc540), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028fc558)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028fc570), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028fc588)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a", StorageClassName:(*string)(0xc003b0fbf0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003b0fc00), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:50:38.019: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pvc-nl6zf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4442", SelfLink:"", UID:"a0198abe-dc98-49f5-a8ce-c771afdebe7a", ResourceVersion:"1160488", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273360, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4442", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028fc5b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028fc5d0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028fc5e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028fc600)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028fc618), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028fc630)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a", StorageClassName:(*string)(0xc003b0fc30), VolumeMode:(*v1.PersistentVolumeMode)(0xc003b0fc40), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", 
AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:50:38.019: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nl6zf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4442", SelfLink:"", UID:"a0198abe-dc98-49f5-a8ce-c771afdebe7a", ResourceVersion:"1160877", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273360, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc003292f78), DeletionGracePeriodSeconds:(*int64)(0xc000397b48), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4442", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003292f90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003292fa8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003292fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003292fd8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003292ff0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003293008)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a", StorageClassName:(*string)(0xc001f5b100), VolumeMode:(*v1.PersistentVolumeMode)(0xc001f5b110), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:50:38.019: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-nl6zf", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4442", SelfLink:"", UID:"a0198abe-dc98-49f5-a8ce-c771afdebe7a", ResourceVersion:"1160878", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273360, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc003293038), DeletionGracePeriodSeconds:(*int64)(0xc004f84158), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4442", "volume.kubernetes.io/selected-node":"latest-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003293050), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003293068)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003293080), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc003293098)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032930b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032930c8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-a0198abe-dc98-49f5-a8ce-c771afdebe7a", StorageClassName:(*string)(0xc001f5b150), VolumeMode:(*v1.PersistentVolumeMode)(0xc001f5b160), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-6rfkf Mar 25 12:50:38.019: INFO: Deleting pod "pvc-volume-tester-6rfkf" in namespace "csi-mock-volumes-4442" STEP: Deleting claim pvc-nl6zf STEP: Deleting storageclass csi-mock-volumes-4442-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4442 STEP: Waiting for namespaces [csi-mock-volumes-4442] to vanish STEP: uninstalling csi mock driver Mar 25 12:50:44.062: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-attacher Mar 25 12:50:44.067: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4442 Mar 25 12:50:44.097: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4442 Mar 25 12:50:44.110: INFO: deleting *v1.Role: csi-mock-volumes-4442-8406/external-attacher-cfg-csi-mock-volumes-4442 Mar 25 
12:50:44.122: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4442-8406/csi-attacher-role-cfg Mar 25 12:50:44.134: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-provisioner Mar 25 12:50:44.140: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4442 Mar 25 12:50:44.194: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4442 Mar 25 12:50:44.230: INFO: deleting *v1.Role: csi-mock-volumes-4442-8406/external-provisioner-cfg-csi-mock-volumes-4442 Mar 25 12:50:44.249: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4442-8406/csi-provisioner-role-cfg Mar 25 12:50:44.254: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-resizer Mar 25 12:50:44.266: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4442 Mar 25 12:50:44.318: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4442 Mar 25 12:50:44.323: INFO: deleting *v1.Role: csi-mock-volumes-4442-8406/external-resizer-cfg-csi-mock-volumes-4442 Mar 25 12:50:44.333: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4442-8406/csi-resizer-role-cfg Mar 25 12:50:44.344: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-snapshotter Mar 25 12:50:44.387: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4442 Mar 25 12:50:44.463: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4442 Mar 25 12:50:44.481: INFO: deleting *v1.Role: csi-mock-volumes-4442-8406/external-snapshotter-leaderelection-csi-mock-volumes-4442 Mar 25 12:50:44.500: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4442-8406/external-snapshotter-leaderelection Mar 25 12:50:44.529: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4442-8406/csi-mock Mar 25 12:50:44.553: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4442 Mar 25 12:50:44.575: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-provisioner-role-csi-mock-volumes-4442 Mar 25 12:50:44.591: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4442 Mar 25 12:50:44.607: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4442 Mar 25 12:50:44.614: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4442 Mar 25 12:50:44.619: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4442 Mar 25 12:50:44.635: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4442 Mar 25 12:50:44.640: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4442-8406/csi-mockplugin Mar 25 12:50:44.643: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4442 STEP: deleting the driver namespace: csi-mock-volumes-4442-8406 STEP: Waiting for namespaces [csi-mock-volumes-4442-8406] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:51:38.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:155.066 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":133,"completed":51,"skipped":3147,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume 
storage capacity exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:51:38.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-2339 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Mar 25 12:51:39.035: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-attacher Mar 25 12:51:39.038: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2339 Mar 25 12:51:39.038: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2339 Mar 25 12:51:39.098: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2339 Mar 25 12:51:39.109: INFO: creating *v1.Role: csi-mock-volumes-2339-4630/external-attacher-cfg-csi-mock-volumes-2339 Mar 25 12:51:39.145: INFO: creating *v1.RoleBinding: csi-mock-volumes-2339-4630/csi-attacher-role-cfg Mar 25 12:51:39.158: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-provisioner Mar 25 12:51:39.164: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2339 Mar 25 12:51:39.164: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2339 Mar 25 12:51:39.182: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2339 Mar 25 12:51:39.235: INFO: creating *v1.Role: 
csi-mock-volumes-2339-4630/external-provisioner-cfg-csi-mock-volumes-2339 Mar 25 12:51:39.253: INFO: creating *v1.RoleBinding: csi-mock-volumes-2339-4630/csi-provisioner-role-cfg Mar 25 12:51:39.272: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-resizer Mar 25 12:51:39.278: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2339 Mar 25 12:51:39.278: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2339 Mar 25 12:51:39.284: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2339 Mar 25 12:51:39.290: INFO: creating *v1.Role: csi-mock-volumes-2339-4630/external-resizer-cfg-csi-mock-volumes-2339 Mar 25 12:51:39.324: INFO: creating *v1.RoleBinding: csi-mock-volumes-2339-4630/csi-resizer-role-cfg Mar 25 12:51:39.384: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-snapshotter Mar 25 12:51:39.409: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2339 Mar 25 12:51:39.409: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2339 Mar 25 12:51:39.428: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2339 Mar 25 12:51:39.434: INFO: creating *v1.Role: csi-mock-volumes-2339-4630/external-snapshotter-leaderelection-csi-mock-volumes-2339 Mar 25 12:51:39.472: INFO: creating *v1.RoleBinding: csi-mock-volumes-2339-4630/external-snapshotter-leaderelection Mar 25 12:51:39.541: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-mock Mar 25 12:51:39.571: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2339 Mar 25 12:51:39.590: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2339 Mar 25 12:51:39.614: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2339 Mar 25 12:51:39.703: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2339 
Mar 25 12:51:39.707: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2339 Mar 25 12:51:39.794: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2339 Mar 25 12:51:39.833: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2339 Mar 25 12:51:39.884: INFO: creating *v1.StatefulSet: csi-mock-volumes-2339-4630/csi-mockplugin Mar 25 12:51:39.908: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2339 Mar 25 12:51:39.983: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2339" Mar 25 12:51:40.004: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2339 to register on node latest-worker2 I0325 12:51:53.794412 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0325 12:51:53.796773 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2339","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 12:51:53.839188 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0325 12:51:53.883031 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0325 12:51:53.898621 7 csi.go:380] gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2339","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0325 12:51:54.793635 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2339"},"Error":"","FullError":null} STEP: Creating pod Mar 25 12:51:56.598: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 12:51:56.633: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-5lq6t] to have phase Bound Mar 25 12:51:56.828: INFO: PersistentVolumeClaim pvc-5lq6t found but phase is Pending instead of Bound. I0325 12:51:56.835391 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0325 12:51:56.837539 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0"}}},"Error":"","FullError":null} Mar 25 12:51:58.870: INFO: PersistentVolumeClaim pvc-5lq6t found and phase=Bound (2.236980173s) I0325 12:51:59.315165 7 csi.go:380] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 12:51:59.317: INFO: >>> kubeConfig: /root/.kube/config I0325 12:51:59.558672 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0","storage.kubernetes.io/csiProvisionerIdentity":"1616676713926-8081-csi-mock-csi-mock-volumes-2339"}},"Response":{},"Error":"","FullError":null} I0325 12:51:59.591356 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Mar 25 12:51:59.594: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:51:59.773: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:51:59.997: INFO: >>> kubeConfig: /root/.kube/config I0325 12:52:00.156676 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0/globalmount","target_path":"/var/lib/kubelet/pods/bcadd054-442f-491b-90ca-9dae5a1c2591/volumes/kubernetes.io~csi/pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0","storage.kubernetes.io/csiProvisionerIdentity":"1616676713926-8081-csi-mock-csi-mock-volumes-2339"}},"Response":{},"Error":"","FullError":null} Mar 25 12:52:05.032: INFO: Deleting 
pod "pvc-volume-tester-5r64g" in namespace "csi-mock-volumes-2339" Mar 25 12:52:05.112: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5r64g" to be fully deleted Mar 25 12:52:07.249: INFO: >>> kubeConfig: /root/.kube/config I0325 12:52:07.450168 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/bcadd054-442f-491b-90ca-9dae5a1c2591/volumes/kubernetes.io~csi/pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0/mount"},"Response":{},"Error":"","FullError":null} I0325 12:52:07.551830 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0325 12:52:07.553891 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0/globalmount"},"Response":{},"Error":"","FullError":null} I0325 12:52:39.435811 7 csi.go:380] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Mar 25 12:52:40.193: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5lq6t", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2339", SelfLink:"", UID:"43da7f65-594c-4c11-b9d6-26dca2d3d0b0", ResourceVersion:"1161419", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273516, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028fcb70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028fcb88)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002863f10), VolumeMode:(*v1.PersistentVolumeMode)(0xc002863f20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:52:40.193: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5lq6t", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2339", SelfLink:"", UID:"43da7f65-594c-4c11-b9d6-26dca2d3d0b0", ResourceVersion:"1161420", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273516, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2339"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003292e88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003292ea0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", 
Time:(*v1.Time)(0xc003292eb8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003292ed0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003047e70), VolumeMode:(*v1.PersistentVolumeMode)(0xc003047e80), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:52:40.193: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5lq6t", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2339", SelfLink:"", UID:"43da7f65-594c-4c11-b9d6-26dca2d3d0b0", ResourceVersion:"1161428", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273516, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2339"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0054ea558), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0054ea570)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0054ea588), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0054ea5a0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0", StorageClassName:(*string)(0xc0016871f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc001687200), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:52:40.193: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5lq6t", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2339", SelfLink:"", UID:"43da7f65-594c-4c11-b9d6-26dca2d3d0b0", ResourceVersion:"1161430", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273516, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2339"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001bff260), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bff278)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", 
Time:(*v1.Time)(0xc001bff290), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bff2a8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0", StorageClassName:(*string)(0xc001ece740), VolumeMode:(*v1.PersistentVolumeMode)(0xc001ece750), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:52:40.193: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5lq6t", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2339", SelfLink:"", UID:"43da7f65-594c-4c11-b9d6-26dca2d3d0b0", ResourceVersion:"1162940", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273516, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc001bff2d8), DeletionGracePeriodSeconds:(*int64)(0xc0033ca828), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2339"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", 
Time:(*v1.Time)(0xc001bff2f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bff308)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001bff320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bff338)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0", StorageClassName:(*string)(0xc001ece790), VolumeMode:(*v1.PersistentVolumeMode)(0xc001ece7a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 12:52:40.193: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-5lq6t", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2339", SelfLink:"", UID:"43da7f65-594c-4c11-b9d6-26dca2d3d0b0", ResourceVersion:"1162972", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752273516, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc001bff368), DeletionGracePeriodSeconds:(*int64)(0xc0033ca908), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2339"}, OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001bff380), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bff398)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001bff3b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bff3c8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-43da7f65-594c-4c11-b9d6-26dca2d3d0b0", StorageClassName:(*string)(0xc001ece860), VolumeMode:(*v1.PersistentVolumeMode)(0xc001ece870), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-5r64g Mar 25 12:52:40.194: INFO: Deleting pod "pvc-volume-tester-5r64g" in namespace "csi-mock-volumes-2339" STEP: Deleting claim pvc-5lq6t STEP: Deleting storageclass csi-mock-volumes-2339-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2339 STEP: Waiting for namespaces [csi-mock-volumes-2339] to vanish STEP: uninstalling csi mock driver Mar 25 12:52:58.370: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-attacher Mar 25 12:52:58.505: INFO: deleting *v1.ClusterRole: 
external-attacher-runner-csi-mock-volumes-2339 Mar 25 12:52:58.587: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2339 Mar 25 12:52:58.814: INFO: deleting *v1.Role: csi-mock-volumes-2339-4630/external-attacher-cfg-csi-mock-volumes-2339 Mar 25 12:52:58.843: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2339-4630/csi-attacher-role-cfg Mar 25 12:52:59.580: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-provisioner Mar 25 12:53:00.109: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2339 Mar 25 12:53:00.286: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2339 Mar 25 12:53:00.362: INFO: deleting *v1.Role: csi-mock-volumes-2339-4630/external-provisioner-cfg-csi-mock-volumes-2339 Mar 25 12:53:00.494: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2339-4630/csi-provisioner-role-cfg Mar 25 12:53:00.547: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-resizer Mar 25 12:53:00.639: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2339 Mar 25 12:53:00.703: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2339 Mar 25 12:53:00.799: INFO: deleting *v1.Role: csi-mock-volumes-2339-4630/external-resizer-cfg-csi-mock-volumes-2339 Mar 25 12:53:00.831: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2339-4630/csi-resizer-role-cfg Mar 25 12:53:00.860: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-snapshotter Mar 25 12:53:00.876: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2339 Mar 25 12:53:01.045: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2339 Mar 25 12:53:01.062: INFO: deleting *v1.Role: csi-mock-volumes-2339-4630/external-snapshotter-leaderelection-csi-mock-volumes-2339 Mar 25 12:53:01.180: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2339-4630/external-snapshotter-leaderelection Mar 25 12:53:01.210: INFO: deleting 
*v1.ServiceAccount: csi-mock-volumes-2339-4630/csi-mock Mar 25 12:53:01.366: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2339 Mar 25 12:53:01.396: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2339 Mar 25 12:53:01.472: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2339 Mar 25 12:53:01.648: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2339 Mar 25 12:53:01.725: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2339 Mar 25 12:53:01.813: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2339 Mar 25 12:53:01.859: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2339 Mar 25 12:53:01.872: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2339-4630/csi-mockplugin Mar 25 12:53:01.981: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2339 STEP: deleting the driver namespace: csi-mock-volumes-2339-4630 STEP: Waiting for namespaces [csi-mock-volumes-2339-4630] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:55:11.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:212.669 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate 
binding","total":133,"completed":52,"skipped":3163,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:55:11.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker" using path "/tmp/local-volume-test-ddb981dc-f4f6-4342-b518-1280d3be84c5" Mar 25 12:55:15.573: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ddb981dc-f4f6-4342-b518-1280d3be84c5 && dd if=/dev/zero of=/tmp/local-volume-test-ddb981dc-f4f6-4342-b518-1280d3be84c5/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ddb981dc-f4f6-4342-b518-1280d3be84c5/file] Namespace:persistent-local-volumes-test-3499 PodName:hostexec-latest-worker-6k8h6 ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:55:15.574: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:55:15.744: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ddb981dc-f4f6-4342-b518-1280d3be84c5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3499 PodName:hostexec-latest-worker-6k8h6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:55:15.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:55:15.841: INFO: Creating a PV followed by a PVC Mar 25 12:55:16.076: INFO: Waiting for PV local-pvjq6sj to bind to PVC pvc-m7bt7 Mar 25 12:55:16.076: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-m7bt7] to have phase Bound Mar 25 12:55:16.102: INFO: PersistentVolumeClaim pvc-m7bt7 found but phase is Pending instead of Bound. 
Mar 25 12:55:18.107: INFO: PersistentVolumeClaim pvc-m7bt7 found and phase=Bound (2.030892479s) Mar 25 12:55:18.107: INFO: Waiting up to 3m0s for PersistentVolume local-pvjq6sj to have phase Bound Mar 25 12:55:18.110: INFO: PersistentVolume local-pvjq6sj found and phase=Bound (2.204478ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 12:55:24.146: INFO: pod "pod-3bbfa8a2-186f-42eb-b4c0-0fd6a7ad1507" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 12:55:24.146: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3499 PodName:pod-3bbfa8a2-186f-42eb-b4c0-0fd6a7ad1507 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:55:24.146: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:55:24.245: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000090 seconds, 195.3KB/s", err: Mar 25 12:55:24.245: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3499 PodName:pod-3bbfa8a2-186f-42eb-b4c0-0fd6a7ad1507 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:55:24.245: INFO: >>> kubeConfig: 
/root/.kube/config Mar 25 12:55:24.345: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 12:55:28.410: INFO: pod "pod-a77bdb0e-7510-422b-8c2a-c94cf5674e29" created on Node "latest-worker" Mar 25 12:55:28.410: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3499 PodName:pod-a77bdb0e-7510-422b-8c2a-c94cf5674e29 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:55:28.410: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:55:28.509: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod2 Mar 25 12:55:28.509: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3499 PodName:pod-a77bdb0e-7510-422b-8c2a-c94cf5674e29 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:55:28.509: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:55:28.610: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000077 seconds, 139.5KB/s", err: STEP: 
Reading in pod1 Mar 25 12:55:28.610: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3499 PodName:pod-3bbfa8a2-186f-42eb-b4c0-0fd6a7ad1507 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:55:28.610: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:55:28.722: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "/dev/loop0.ontent...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-3bbfa8a2-186f-42eb-b4c0-0fd6a7ad1507 in namespace persistent-local-volumes-test-3499 STEP: Deleting pod2 STEP: Deleting pod pod-a77bdb0e-7510-422b-8c2a-c94cf5674e29 in namespace persistent-local-volumes-test-3499 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 12:55:28.758: INFO: Deleting PersistentVolumeClaim "pvc-m7bt7" Mar 25 12:55:28.775: INFO: Deleting PersistentVolume "local-pvjq6sj" Mar 25 12:55:28.825: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ddb981dc-f4f6-4342-b518-1280d3be84c5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3499 PodName:hostexec-latest-worker-6k8h6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:55:28.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker" at path /tmp/local-volume-test-ddb981dc-f4f6-4342-b518-1280d3be84c5/file Mar 25 12:55:28.930: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] 
Namespace:persistent-local-volumes-test-3499 PodName:hostexec-latest-worker-6k8h6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:55:28.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ddb981dc-f4f6-4342-b518-1280d3be84c5 Mar 25 12:55:29.041: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ddb981dc-f4f6-4342-b518-1280d3be84c5] Namespace:persistent-local-volumes-test-3499 PodName:hostexec-latest-worker-6k8h6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:55:29.041: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:55:29.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3499" for this suite. 
• [SLOW TEST:17.731 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":53,"skipped":3248,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Volumes NFSv4 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:55:29.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Mar 25 12:55:29.221: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:55:29.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-7687" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.066 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv4 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:78 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:55:29.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Mar 25 12:55:33.463: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b61f98b1-aea8-483a-8d71-e9961eb49815] Namespace:persistent-local-volumes-test-4389 PodName:hostexec-latest-worker-nkbsm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:55:33.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:55:33.581: INFO: Creating a PV followed by a PVC Mar 25 12:55:33.594: INFO: Waiting for PV local-pv7vnc6 to bind to PVC pvc-ldmgf Mar 25 12:55:33.594: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-ldmgf] to have phase Bound Mar 25 12:55:33.611: INFO: PersistentVolumeClaim pvc-ldmgf found but phase is Pending instead of Bound. Mar 25 12:55:36.001: INFO: PersistentVolumeClaim pvc-ldmgf found and phase=Bound (2.406777137s) Mar 25 12:55:36.001: INFO: Waiting up to 3m0s for PersistentVolume local-pv7vnc6 to have phase Bound Mar 25 12:55:36.044: INFO: PersistentVolume local-pv7vnc6 found and phase=Bound (42.985486ms) [It] should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 STEP: local-volume-type: dir STEP: Initializing test volumes Mar 25 12:55:36.181: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3e124ba0-2ce6-4a9b-a18a-e824a4e995ef] Namespace:persistent-local-volumes-test-4389 PodName:hostexec-latest-worker-nkbsm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:55:36.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:55:36.269: INFO: 
Creating a PV followed by a PVC Mar 25 12:55:36.645: INFO: Waiting for PV local-pvrbt26 to bind to PVC pvc-9f9km Mar 25 12:55:36.645: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-9f9km] to have phase Bound Mar 25 12:55:36.649: INFO: PersistentVolumeClaim pvc-9f9km found but phase is Pending instead of Bound. Mar 25 12:55:38.728: INFO: PersistentVolumeClaim pvc-9f9km found and phase=Bound (2.082717452s) Mar 25 12:55:38.728: INFO: Waiting up to 3m0s for PersistentVolume local-pvrbt26 to have phase Bound Mar 25 12:55:38.735: INFO: PersistentVolume local-pvrbt26 found and phase=Bound (6.436762ms) Mar 25 12:55:39.171: INFO: Waiting up to 5m0s for pod "pod-5280e376-e8c5-45f9-a30d-0cf28b2872e3" in namespace "persistent-local-volumes-test-4389" to be "Unschedulable" Mar 25 12:55:39.243: INFO: Pod "pod-5280e376-e8c5-45f9-a30d-0cf28b2872e3": Phase="Pending", Reason="", readiness=false. Elapsed: 71.595543ms Mar 25 12:55:41.686: INFO: Pod "pod-5280e376-e8c5-45f9-a30d-0cf28b2872e3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.514428021s Mar 25 12:55:41.686: INFO: Pod "pod-5280e376-e8c5-45f9-a30d-0cf28b2872e3" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Mar 25 12:55:41.686: INFO: Deleting PersistentVolumeClaim "pvc-ldmgf" Mar 25 12:55:41.739: INFO: Deleting PersistentVolume "local-pv7vnc6" STEP: Removing the test directory Mar 25 12:55:42.237: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b61f98b1-aea8-483a-8d71-e9961eb49815] Namespace:persistent-local-volumes-test-4389 PodName:hostexec-latest-worker-nkbsm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:55:42.237: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:55:42.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4389" for this suite. 
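The "Waiting up to … for pod … to be \"Unschedulable\"" lines above, with their repeated Phase/Elapsed updates, come from the framework's generic poll-until-condition helper (the real implementation is Go, inside test/e2e/framework). A minimal shell rendition of that loop, with a hypothetical `phase_now` standing in for the API query:

```shell
# Hypothetical sketch of the framework's wait loop (the real code is Go).
# Polls a phase-reporting command until it matches, or a timeout elapses.
wait_for_phase() {
  want="$1"; get_phase="$2"; timeout="${3:-10}"
  start=$(date +%s)
  while :; do
    phase=$("$get_phase")
    if [ "$phase" = "$want" ]; then
      echo "reached $want"
      return 0
    fi
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      echo "timeout: still $phase"
      return 1
    fi
    sleep 1
  done
}

phase_now() { echo "Unschedulable"; }   # stand-in for querying the pod phase
result=$(wait_for_phase "Unschedulable" phase_now 5)
echo "$result"
```

The same loop shape is what produces the runs of "PersistentVolumeClaim … found but phase is Pending instead of Bound." elsewhere in this log while a claim binds.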
• [SLOW TEST:13.390 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":133,"completed":54,"skipped":3374,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSS ------------------------------ [sig-storage] Multi-AZ Cluster Volumes should schedule pods in the same zones as statically provisioned PVs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:57 [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:55:42.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:46 Mar 25 12:55:42.909: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:55:42.910: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-8185" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.307 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should schedule pods in the same zones as statically provisioned PVs [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:57 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:55:42.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 STEP: Creating a pod to test hostPath subPath Mar 25 12:55:43.154: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-337" to be "Succeeded or Failed" Mar 25 12:55:43.228: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 74.449147ms Mar 25 12:55:45.400: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246252823s Mar 25 12:55:47.405: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251826858s Mar 25 12:55:49.698: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.544547152s Mar 25 12:55:51.807: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 8.653037288s Mar 25 12:55:53.931: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.777255293s STEP: Saw pod success Mar 25 12:55:53.931: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 25 12:55:54.118: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-2: STEP: delete the pod Mar 25 12:55:54.491: INFO: Waiting for pod pod-host-path-test to disappear Mar 25 12:55:54.535: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:55:54.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-337" for this suite. 
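The hostPath subPath test above mounts the same host directory into two containers, writes from one, and reads the result back from the other (test-container-2 in the logs). A rough illustration of why that works, using plain directories in place of a pod (all names here are made up):

```shell
# Hypothetical stand-in for a node's hostPath directory; a subPath mount
# exposes a subdirectory of it, shared by every container that mounts it.
host=$(mktemp -d)
mkdir -p "$host/sub"                         # the subPath directory
echo from-container-1 > "$host/sub/data"     # container 1 writes via its mount
read_back=$(cat "$host/sub/data")            # container 2 reads the same file
echo "$read_back"
```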
• [SLOW TEST:11.632 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":133,"completed":55,"skipped":3427,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:55:54.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-6772 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 12:55:55.424: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-attacher Mar 25 12:55:55.489: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6772 Mar 25 12:55:55.489: INFO: Define cluster role 
external-attacher-runner-csi-mock-volumes-6772 Mar 25 12:55:55.508: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6772 Mar 25 12:55:55.560: INFO: creating *v1.Role: csi-mock-volumes-6772-1138/external-attacher-cfg-csi-mock-volumes-6772 Mar 25 12:55:55.651: INFO: creating *v1.RoleBinding: csi-mock-volumes-6772-1138/csi-attacher-role-cfg Mar 25 12:55:55.689: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-provisioner Mar 25 12:55:55.725: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6772 Mar 25 12:55:55.726: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6772 Mar 25 12:55:55.745: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6772 Mar 25 12:55:55.793: INFO: creating *v1.Role: csi-mock-volumes-6772-1138/external-provisioner-cfg-csi-mock-volumes-6772 Mar 25 12:55:55.798: INFO: creating *v1.RoleBinding: csi-mock-volumes-6772-1138/csi-provisioner-role-cfg Mar 25 12:55:55.811: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-resizer Mar 25 12:55:55.851: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6772 Mar 25 12:55:55.851: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6772 Mar 25 12:55:55.877: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6772 Mar 25 12:55:55.882: INFO: creating *v1.Role: csi-mock-volumes-6772-1138/external-resizer-cfg-csi-mock-volumes-6772 Mar 25 12:55:55.888: INFO: creating *v1.RoleBinding: csi-mock-volumes-6772-1138/csi-resizer-role-cfg Mar 25 12:55:55.944: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-snapshotter Mar 25 12:55:55.961: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6772 Mar 25 12:55:55.961: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6772 Mar 25 12:55:55.978: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6772 
Mar 25 12:55:55.984: INFO: creating *v1.Role: csi-mock-volumes-6772-1138/external-snapshotter-leaderelection-csi-mock-volumes-6772 Mar 25 12:55:55.990: INFO: creating *v1.RoleBinding: csi-mock-volumes-6772-1138/external-snapshotter-leaderelection Mar 25 12:55:55.996: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-mock Mar 25 12:55:56.018: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6772 Mar 25 12:55:56.099: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6772 Mar 25 12:55:56.104: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6772 Mar 25 12:55:56.116: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6772 Mar 25 12:55:56.134: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6772 Mar 25 12:55:56.146: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6772 Mar 25 12:55:56.152: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6772 Mar 25 12:55:56.188: INFO: creating *v1.StatefulSet: csi-mock-volumes-6772-1138/csi-mockplugin Mar 25 12:55:56.273: INFO: creating *v1.StatefulSet: csi-mock-volumes-6772-1138/csi-mockplugin-attacher Mar 25 12:55:56.285: INFO: creating *v1.StatefulSet: csi-mock-volumes-6772-1138/csi-mockplugin-resizer Mar 25 12:55:56.307: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6772 to register on node latest-worker2 STEP: Creating pod Mar 25 12:56:12.829: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 12:56:12.837: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-7lwpm] to have phase Bound Mar 25 12:56:12.841: INFO: PersistentVolumeClaim pvc-7lwpm found but phase is Pending instead of Bound. 
Mar 25 12:56:14.847: INFO: PersistentVolumeClaim pvc-7lwpm found and phase=Bound (2.009510665s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-m8mbn Mar 25 12:57:51.133: INFO: Deleting pod "pvc-volume-tester-m8mbn" in namespace "csi-mock-volumes-6772" Mar 25 12:57:51.377: INFO: Wait up to 5m0s for pod "pvc-volume-tester-m8mbn" to be fully deleted STEP: Deleting claim pvc-7lwpm Mar 25 12:58:37.434: INFO: Waiting up to 2m0s for PersistentVolume pvc-32856398-7bc8-42e2-9e5d-079d6eff7547 to get deleted Mar 25 12:58:37.458: INFO: PersistentVolume pvc-32856398-7bc8-42e2-9e5d-079d6eff7547 found and phase=Bound (23.558134ms) Mar 25 12:58:39.462: INFO: PersistentVolume pvc-32856398-7bc8-42e2-9e5d-079d6eff7547 was removed STEP: Deleting storageclass csi-mock-volumes-6772-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6772 STEP: Waiting for namespaces [csi-mock-volumes-6772] to vanish STEP: uninstalling csi mock driver Mar 25 12:58:45.502: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-attacher Mar 25 12:58:45.508: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6772 Mar 25 12:58:45.515: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6772 Mar 25 12:58:45.536: INFO: deleting *v1.Role: csi-mock-volumes-6772-1138/external-attacher-cfg-csi-mock-volumes-6772 Mar 25 12:58:45.559: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6772-1138/csi-attacher-role-cfg Mar 25 12:58:45.587: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-provisioner Mar 25 12:58:45.598: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6772 Mar 25 12:58:45.608: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6772 Mar 25 12:58:45.616: INFO: deleting *v1.Role: 
csi-mock-volumes-6772-1138/external-provisioner-cfg-csi-mock-volumes-6772 Mar 25 12:58:45.625: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6772-1138/csi-provisioner-role-cfg Mar 25 12:58:45.633: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-resizer Mar 25 12:58:45.640: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6772 Mar 25 12:58:45.658: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6772 Mar 25 12:58:45.706: INFO: deleting *v1.Role: csi-mock-volumes-6772-1138/external-resizer-cfg-csi-mock-volumes-6772 Mar 25 12:58:45.717: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6772-1138/csi-resizer-role-cfg Mar 25 12:58:45.723: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-snapshotter Mar 25 12:58:45.730: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6772 Mar 25 12:58:45.778: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6772 Mar 25 12:58:45.833: INFO: deleting *v1.Role: csi-mock-volumes-6772-1138/external-snapshotter-leaderelection-csi-mock-volumes-6772 Mar 25 12:58:45.883: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6772-1138/external-snapshotter-leaderelection Mar 25 12:58:45.892: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6772-1138/csi-mock Mar 25 12:58:45.903: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6772 Mar 25 12:58:45.920: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6772 Mar 25 12:58:45.982: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6772 Mar 25 12:58:45.993: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6772 Mar 25 12:58:45.999: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6772 Mar 25 12:58:46.004: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-snapshotter-role-csi-mock-volumes-6772 Mar 25 12:58:46.011: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6772 Mar 25 12:58:46.016: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6772-1138/csi-mockplugin Mar 25 12:58:46.023: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6772-1138/csi-mockplugin-attacher Mar 25 12:58:46.030: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6772-1138/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-6772-1138 STEP: Waiting for namespaces [csi-mock-volumes-6772-1138] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:59:42.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:227.490 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":133,"completed":56,"skipped":3496,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 
[BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:59:42.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 25 12:59:42.199: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:59:42.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6728" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.144 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound 
PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:59:42.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 12:59:48.374: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-653d2275-deff-467c-9cbd-4b0b7a1f2b9f-backend && ln -s /tmp/local-volume-test-653d2275-deff-467c-9cbd-4b0b7a1f2b9f-backend /tmp/local-volume-test-653d2275-deff-467c-9cbd-4b0b7a1f2b9f] Namespace:persistent-local-volumes-test-6951 PodName:hostexec-latest-worker-6dqqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:59:48.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 12:59:48.469: INFO: Creating a PV followed by a PVC Mar 25 12:59:48.516: INFO: Waiting for PV local-pvvj28s to bind to PVC pvc-v7fsw Mar 25 12:59:48.517: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-v7fsw] to have phase Bound Mar 25 12:59:48.522: INFO: PersistentVolumeClaim pvc-v7fsw found but phase is Pending instead of Bound. 
Mar 25 12:59:50.682: INFO: PersistentVolumeClaim pvc-v7fsw found but phase is Pending instead of Bound. Mar 25 12:59:52.781: INFO: PersistentVolumeClaim pvc-v7fsw found but phase is Pending instead of Bound. Mar 25 12:59:54.786: INFO: PersistentVolumeClaim pvc-v7fsw found but phase is Pending instead of Bound. Mar 25 12:59:56.790: INFO: PersistentVolumeClaim pvc-v7fsw found but phase is Pending instead of Bound. Mar 25 12:59:58.837: INFO: PersistentVolumeClaim pvc-v7fsw found but phase is Pending instead of Bound. Mar 25 13:00:00.914: INFO: PersistentVolumeClaim pvc-v7fsw found but phase is Pending instead of Bound. Mar 25 13:00:02.919: INFO: PersistentVolumeClaim pvc-v7fsw found and phase=Bound (14.402923157s) Mar 25 13:00:02.920: INFO: Waiting up to 3m0s for PersistentVolume local-pvvj28s to have phase Bound Mar 25 13:00:02.922: INFO: PersistentVolume local-pvvj28s found and phase=Bound (2.233249ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 13:00:09.009: INFO: pod "pod-3dd03da2-5ec6-41d8-a5a3-de4428b91e58" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 13:00:09.009: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6951 PodName:pod-3dd03da2-5ec6-41d8-a5a3-de4428b91e58 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:00:09.009: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:00:09.094: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading 
in pod1 Mar 25 13:00:09.094: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6951 PodName:pod-3dd03da2-5ec6-41d8-a5a3-de4428b91e58 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:00:09.094: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:00:09.193: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-3dd03da2-5ec6-41d8-a5a3-de4428b91e58 in namespace persistent-local-volumes-test-6951 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 13:00:09.200: INFO: Deleting PersistentVolumeClaim "pvc-v7fsw" Mar 25 13:00:09.237: INFO: Deleting PersistentVolume "local-pvvj28s" STEP: Removing the test directory Mar 25 13:00:09.244: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-653d2275-deff-467c-9cbd-4b0b7a1f2b9f && rm -r /tmp/local-volume-test-653d2275-deff-467c-9cbd-4b0b7a1f2b9f-backend] Namespace:persistent-local-volumes-test-6951 PodName:hostexec-latest-worker-6dqqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:00:09.244: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:00:09.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6951" for this suite. 
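The dir-link volume type exercised above is backed by a symlink: setup creates `<path>-backend` and links `<path>` to it, the pod writes and reads through the link, and teardown removes both. The same host-side sequence, replayed against a temp directory instead of the test's `/tmp/local-volume-test-<uuid>` path and without the nsenter wrapper:

```shell
# Replays the dir-link setup/write/read/teardown from the log, minus nsenter.
# Paths are stand-ins for the test's /tmp/local-volume-test-<uuid> names.
base=$(mktemp -d)
vol="$base/local-volume-test-demo"
mkdir "${vol}-backend" && ln -s "${vol}-backend" "$vol"   # setup step
echo test-file-content > "$vol/test-file"    # what pod1's write step does
content=$(cat "$vol/test-file")              # what the read step returns
echo "$content"
rm -r "$vol" && rm -r "${vol}-backend"       # teardown, as in the AfterEach
```

Note that `rm -r` on the symlink removes only the link itself, which is why the cleanup command in the log deletes the link and the `-backend` directory separately.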
• [SLOW TEST:27.472 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":57,"skipped":3565,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:00:09.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : projected 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Mar 25 13:00:39.838: INFO: Deleting pod "pv-5975"/"pod-ephm-test-projected-vfpt" Mar 25 13:00:39.838: INFO: Deleting pod "pod-ephm-test-projected-vfpt" in namespace "pv-5975" Mar 25 13:00:39.846: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-vfpt" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:00:45.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5975" for this suite. • [SLOW TEST:36.189 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":133,"completed":58,"skipped":3636,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:00:45.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 13:00:50.002: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-27c22482-0a6a-4baa-8174-61013574f379 && mount --bind /tmp/local-volume-test-27c22482-0a6a-4baa-8174-61013574f379 /tmp/local-volume-test-27c22482-0a6a-4baa-8174-61013574f379] Namespace:persistent-local-volumes-test-6871 PodName:hostexec-latest-worker2-l5vz8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:00:50.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 13:00:50.121: INFO: Creating a PV followed by a PVC Mar 25 13:00:50.136: INFO: Waiting for PV local-pv9trzs to bind to PVC pvc-xsmg8 Mar 25 13:00:50.136: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-xsmg8] to have phase Bound Mar 25 13:00:50.227: INFO: PersistentVolumeClaim pvc-xsmg8 found but phase is Pending instead of Bound. Mar 25 13:00:52.233: INFO: PersistentVolumeClaim pvc-xsmg8 found but phase is Pending instead of Bound. Mar 25 13:00:54.239: INFO: PersistentVolumeClaim pvc-xsmg8 found but phase is Pending instead of Bound. Mar 25 13:00:56.245: INFO: PersistentVolumeClaim pvc-xsmg8 found but phase is Pending instead of Bound. 
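The repeated "found but phase is Pending instead of Bound" lines are the framework's fixed-interval poll: it re-reads the claim roughly every 2 seconds until the phase flips or the 3m0s budget runs out. A hedged local sketch of that pattern (not the framework's Go code; a plain file standing in for the PVC's phase, and the background writer standing in for the PV controller binding the claim):

```shell
phase_file=$(mktemp)
echo Pending > "$phase_file"
( sleep 1; echo Bound > "$phase_file" ) &   # stand-in for the controller binding the claim
tries=0
phase=Pending
while [ "$tries" -lt 90 ]; do                # 90 polls x 2s ~= the 3m0s budget
  phase=$(cat "$phase_file")
  [ "$phase" = "Bound" ] && break            # claim bound; stop waiting
  sleep 2                                    # poll interval, ~2s as in the log timestamps
  tries=$((tries + 1))
done
echo "phase=$phase"
wait
rm -f "$phase_file"
```

The log's `(12.140994244s)` suffix on the final line is the total time this loop spent waiting.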
Mar 25 13:00:58.249: INFO: PersistentVolumeClaim pvc-xsmg8 found but phase is Pending instead of Bound. Mar 25 13:01:00.254: INFO: PersistentVolumeClaim pvc-xsmg8 found but phase is Pending instead of Bound. Mar 25 13:01:02.277: INFO: PersistentVolumeClaim pvc-xsmg8 found and phase=Bound (12.140994244s) Mar 25 13:01:02.277: INFO: Waiting up to 3m0s for PersistentVolume local-pv9trzs to have phase Bound Mar 25 13:01:02.280: INFO: PersistentVolume local-pv9trzs found and phase=Bound (2.743086ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Mar 25 13:01:02.284: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 13:01:02.285: INFO: Deleting PersistentVolumeClaim "pvc-xsmg8" Mar 25 13:01:02.289: INFO: Deleting PersistentVolume "local-pv9trzs" STEP: Removing the test directory Mar 25 13:01:02.331: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-27c22482-0a6a-4baa-8174-61013574f379 && rm -r /tmp/local-volume-test-27c22482-0a6a-4baa-8174-61013574f379] Namespace:persistent-local-volumes-test-6871 PodName:hostexec-latest-worker2-l5vz8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:01:02.331: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:01:02.479: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6871" for this suite. S [SKIPPING] [16.637 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:01:02.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 
[BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 13:01:06.657: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806-backend && mount --bind /tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806-backend /tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806-backend && ln -s /tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806-backend /tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806] Namespace:persistent-local-volumes-test-1349 PodName:hostexec-latest-worker-7hj6t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:01:06.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 13:01:06.791: INFO: Creating a PV followed by a PVC Mar 25 13:01:06.815: INFO: Waiting for PV local-pvc8lq6 to bind to PVC pvc-k4wxp Mar 25 13:01:06.815: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-k4wxp] to have phase Bound Mar 25 13:01:06.836: INFO: PersistentVolumeClaim pvc-k4wxp found but phase is Pending instead of Bound. 
Mar 25 13:01:08.885: INFO: PersistentVolumeClaim pvc-k4wxp found and phase=Bound (2.070121369s) Mar 25 13:01:08.885: INFO: Waiting up to 3m0s for PersistentVolume local-pvc8lq6 to have phase Bound Mar 25 13:01:08.888: INFO: PersistentVolume local-pvc8lq6 found and phase=Bound (2.311463ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 13:01:15.090: INFO: pod "pod-3e38acef-2350-429b-8ca7-196bed2b2b83" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 13:01:15.090: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1349 PodName:pod-3e38acef-2350-429b-8ca7-196bed2b2b83 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:15.090: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:15.230: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 13:01:15.231: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1349 PodName:pod-3e38acef-2350-429b-8ca7-196bed2b2b83 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:15.231: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:15.328: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 13:01:15.328: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo 
/tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1349 PodName:pod-3e38acef-2350-429b-8ca7-196bed2b2b83 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:15.328: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:15.465: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-3e38acef-2350-429b-8ca7-196bed2b2b83 in namespace persistent-local-volumes-test-1349 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 13:01:15.482: INFO: Deleting PersistentVolumeClaim "pvc-k4wxp" Mar 25 13:01:15.574: INFO: Deleting PersistentVolume "local-pvc8lq6" STEP: Removing the test directory Mar 25 13:01:15.590: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806 && umount /tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806-backend && rm -r /tmp/local-volume-test-e14ef1d4-4e86-4a56-bf1a-8b0b7ea44806-backend] Namespace:persistent-local-volumes-test-1349 PodName:hostexec-latest-worker-7hj6t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:01:15.590: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:01:16.058: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1349" for this suite. • [SLOW TEST:13.822 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":59,"skipped":3685,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:01:16.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 13:01:23.075: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-baca0ee9-fb4d-46ed-a12b-5830ee87b6e8-backend && ln -s /tmp/local-volume-test-baca0ee9-fb4d-46ed-a12b-5830ee87b6e8-backend /tmp/local-volume-test-baca0ee9-fb4d-46ed-a12b-5830ee87b6e8] Namespace:persistent-local-volumes-test-3210 PodName:hostexec-latest-worker-sz4ff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:01:23.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 13:01:23.190: INFO: Creating a PV followed by a PVC Mar 25 13:01:23.203: INFO: Waiting for PV local-pvp47wz to bind to PVC pvc-ldpvn Mar 25 13:01:23.203: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-ldpvn] to have phase Bound Mar 25 13:01:23.237: INFO: PersistentVolumeClaim pvc-ldpvn found but phase is Pending instead of Bound. 
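The "Writing in pod1" / reading steps these specs exec (via `/bin/sh -c` in the `write-pod` container) reduce to a two-command sequence against the mounted volume. A local stand-in, with a temp directory in place of the pod's `/mnt` mount point:

```shell
mnt=$(mktemp -d)                                     # stands in for the pod's /mnt
mkdir -p "$mnt/volume1"
echo test-file-content > "$mnt/volume1/test-file"    # "Writing in pod1"
content=$(cat "$mnt/volume1/test-file")              # read-back, as pod2 does
echo "$content"
rm -r "$mnt"
```

In the two-pod specs the read runs in a second pod mounting the same local PV, which is what demonstrates the volume is genuinely shared across pods on the node.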
Mar 25 13:01:25.244: INFO: PersistentVolumeClaim pvc-ldpvn found and phase=Bound (2.041325873s) Mar 25 13:01:25.244: INFO: Waiting up to 3m0s for PersistentVolume local-pvp47wz to have phase Bound Mar 25 13:01:25.247: INFO: PersistentVolume local-pvp47wz found and phase=Bound (3.000724ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Mar 25 13:01:31.330: INFO: pod "pod-5b8f3bdd-a01c-4b23-8219-5a7b6879e6e0" created on Node "latest-worker" STEP: Writing in pod1 Mar 25 13:01:31.330: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3210 PodName:pod-5b8f3bdd-a01c-4b23-8219-5a7b6879e6e0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:31.330: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:31.489: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 13:01:31.489: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3210 PodName:pod-5b8f3bdd-a01c-4b23-8219-5a7b6879e6e0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:31.489: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:31.581: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Mar 25 13:01:35.634: INFO: pod "pod-f416468b-d2a6-44f8-976a-8729793da59b" created on Node "latest-worker" Mar 25 13:01:35.634: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3210 
PodName:pod-f416468b-d2a6-44f8-976a-8729793da59b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:35.634: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:35.757: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Mar 25 13:01:35.757: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-baca0ee9-fb4d-46ed-a12b-5830ee87b6e8 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3210 PodName:pod-f416468b-d2a6-44f8-976a-8729793da59b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:35.757: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:35.848: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-baca0ee9-fb4d-46ed-a12b-5830ee87b6e8 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Mar 25 13:01:35.848: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3210 PodName:pod-5b8f3bdd-a01c-4b23-8219-5a7b6879e6e0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:35.848: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:35.939: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-baca0ee9-fb4d-46ed-a12b-5830ee87b6e8", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-5b8f3bdd-a01c-4b23-8219-5a7b6879e6e0 in namespace persistent-local-volumes-test-3210 STEP: Deleting pod2 STEP: Deleting pod pod-f416468b-d2a6-44f8-976a-8729793da59b in namespace persistent-local-volumes-test-3210 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 13:01:35.996: INFO: 
Deleting PersistentVolumeClaim "pvc-ldpvn" Mar 25 13:01:36.013: INFO: Deleting PersistentVolume "local-pvp47wz" STEP: Removing the test directory Mar 25 13:01:36.027: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-baca0ee9-fb4d-46ed-a12b-5830ee87b6e8 && rm -r /tmp/local-volume-test-baca0ee9-fb4d-46ed-a12b-5830ee87b6e8-backend] Namespace:persistent-local-volumes-test-3210 PodName:hostexec-latest-worker-sz4ff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:01:36.028: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:01:36.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3210" for this suite. • [SLOW TEST:19.836 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":133,"completed":60,"skipped":3752,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity 
CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:533 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:01:36.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:533 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a PersistentVolumeClaim with storage class STEP: Ensuring resource quota status captures persistent volume claim creation STEP: Deleting a PersistentVolumeClaim STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:01:47.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-210" for this suite. • [SLOW TEST:11.231 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. 
[sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:533 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]","total":133,"completed":61,"skipped":3770,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:01:47.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 25 13:01:47.531: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:01:47.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6455" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.142 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:01:47.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 
STEP: Initializing test volumes Mar 25 13:01:51.673: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-114620ff-3ef2-4044-9dec-8c5aac9f90fb] Namespace:persistent-local-volumes-test-9524 PodName:hostexec-latest-worker2-9xjbv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:01:51.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 13:01:51.812: INFO: Creating a PV followed by a PVC Mar 25 13:01:51.829: INFO: Waiting for PV local-pvxckrd to bind to PVC pvc-hr4zq Mar 25 13:01:51.829: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-hr4zq] to have phase Bound Mar 25 13:01:51.847: INFO: PersistentVolumeClaim pvc-hr4zq found but phase is Pending instead of Bound. Mar 25 13:01:53.852: INFO: PersistentVolumeClaim pvc-hr4zq found and phase=Bound (2.022857961s) Mar 25 13:01:53.852: INFO: Waiting up to 3m0s for PersistentVolume local-pvxckrd to have phase Bound Mar 25 13:01:53.855: INFO: PersistentVolume local-pvxckrd found and phase=Bound (3.144079ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 13:01:57.908: INFO: pod "pod-7168e98a-9933-42de-8110-e88a3ea8e308" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 13:01:57.908: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9524 PodName:pod-7168e98a-9933-42de-8110-e88a3ea8e308 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:57.908: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:58.016: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", 
out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Mar 25 13:01:58.016: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9524 PodName:pod-7168e98a-9933-42de-8110-e88a3ea8e308 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:58.016: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:58.118: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Mar 25 13:01:58.118: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-114620ff-3ef2-4044-9dec-8c5aac9f90fb > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9524 PodName:pod-7168e98a-9933-42de-8110-e88a3ea8e308 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:01:58.118: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:01:58.225: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-114620ff-3ef2-4044-9dec-8c5aac9f90fb > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-7168e98a-9933-42de-8110-e88a3ea8e308 in namespace persistent-local-volumes-test-9524 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 13:01:58.232: INFO: Deleting PersistentVolumeClaim "pvc-hr4zq" Mar 25 13:01:58.252: INFO: Deleting PersistentVolume "local-pvxckrd" STEP: Removing the test directory Mar 25 
13:01:58.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-114620ff-3ef2-4044-9dec-8c5aac9f90fb] Namespace:persistent-local-volumes-test-9524 PodName:hostexec-latest-worker2-9xjbv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:01:58.265: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:01:58.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9524" for this suite. • [SLOW TEST:10.858 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":133,"completed":62,"skipped":3890,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:01:58.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 STEP: Creating a pod to test downward API volume plugin Mar 25 13:01:58.532: INFO: Waiting up to 5m0s for pod "metadata-volume-a59690e5-7941-4d5c-a00d-dfb0f02f2b80" in namespace "projected-812" to be "Succeeded or Failed" Mar 25 13:01:58.554: INFO: Pod "metadata-volume-a59690e5-7941-4d5c-a00d-dfb0f02f2b80": Phase="Pending", Reason="", readiness=false. Elapsed: 22.291651ms Mar 25 13:02:00.560: INFO: Pod "metadata-volume-a59690e5-7941-4d5c-a00d-dfb0f02f2b80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027953811s Mar 25 13:02:02.565: INFO: Pod "metadata-volume-a59690e5-7941-4d5c-a00d-dfb0f02f2b80": Phase="Running", Reason="", readiness=true. Elapsed: 4.033253696s Mar 25 13:02:04.594: INFO: Pod "metadata-volume-a59690e5-7941-4d5c-a00d-dfb0f02f2b80": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.061688599s STEP: Saw pod success Mar 25 13:02:04.594: INFO: Pod "metadata-volume-a59690e5-7941-4d5c-a00d-dfb0f02f2b80" satisfied condition "Succeeded or Failed" Mar 25 13:02:04.662: INFO: Trying to get logs from node latest-worker pod metadata-volume-a59690e5-7941-4d5c-a00d-dfb0f02f2b80 container client-container: STEP: delete the pod Mar 25 13:02:04.859: INFO: Waiting for pod metadata-volume-a59690e5-7941-4d5c-a00d-dfb0f02f2b80 to disappear Mar 25 13:02:04.865: INFO: Pod metadata-volume-a59690e5-7941-4d5c-a00d-dfb0f02f2b80 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:02:04.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-812" for this suite. • [SLOW TEST:6.497 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":63,"skipped":3902,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 [BeforeEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:02:04.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 STEP: Creating a pod to test downward API volume plugin Mar 25 13:02:05.038: INFO: Waiting up to 5m0s for pod "metadata-volume-877f1b25-8f57-42b9-b472-f484bf426fbe" in namespace "downward-api-4266" to be "Succeeded or Failed" Mar 25 13:02:05.071: INFO: Pod "metadata-volume-877f1b25-8f57-42b9-b472-f484bf426fbe": Phase="Pending", Reason="", readiness=false. Elapsed: 33.193152ms Mar 25 13:02:07.077: INFO: Pod "metadata-volume-877f1b25-8f57-42b9-b472-f484bf426fbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038650958s Mar 25 13:02:09.082: INFO: Pod "metadata-volume-877f1b25-8f57-42b9-b472-f484bf426fbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044315072s Mar 25 13:02:11.089: INFO: Pod "metadata-volume-877f1b25-8f57-42b9-b472-f484bf426fbe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.050761003s STEP: Saw pod success Mar 25 13:02:11.089: INFO: Pod "metadata-volume-877f1b25-8f57-42b9-b472-f484bf426fbe" satisfied condition "Succeeded or Failed" Mar 25 13:02:11.093: INFO: Trying to get logs from node latest-worker2 pod metadata-volume-877f1b25-8f57-42b9-b472-f484bf426fbe container client-container: STEP: delete the pod Mar 25 13:02:11.192: INFO: Waiting for pod metadata-volume-877f1b25-8f57-42b9-b472-f484bf426fbe to disappear Mar 25 13:02:11.195: INFO: Pod metadata-volume-877f1b25-8f57-42b9-b472-f484bf426fbe no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:02:11.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4266" for this suite. • [SLOW TEST:6.303 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":64,"skipped":3920,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:02:11.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 13:02:11.263: INFO: Waiting up to 5m0s for pod "pod-a971af37-83fc-4863-81df-94af86194ae9" in namespace "emptydir-4114" to be "Succeeded or Failed" Mar 25 13:02:11.277: INFO: Pod "pod-a971af37-83fc-4863-81df-94af86194ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.392854ms Mar 25 13:02:13.378: INFO: Pod "pod-a971af37-83fc-4863-81df-94af86194ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11492984s Mar 25 13:02:15.456: INFO: Pod "pod-a971af37-83fc-4863-81df-94af86194ae9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.192846515s STEP: Saw pod success Mar 25 13:02:15.456: INFO: Pod "pod-a971af37-83fc-4863-81df-94af86194ae9" satisfied condition "Succeeded or Failed" Mar 25 13:02:15.468: INFO: Trying to get logs from node latest-worker2 pod pod-a971af37-83fc-4863-81df-94af86194ae9 container test-container: STEP: delete the pod Mar 25 13:02:16.082: INFO: Waiting for pod pod-a971af37-83fc-4863-81df-94af86194ae9 to disappear Mar 25 13:02:16.288: INFO: Pod pod-a971af37-83fc-4863-81df-94af86194ae9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:02:16.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4114" for this suite. • [SLOW TEST:5.121 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":133,"completed":65,"skipped":3929,"failed":1,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:02:16.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-7393 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 13:02:16.944: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-attacher Mar 25 13:02:16.953: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7393 Mar 25 13:02:16.953: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7393 Mar 25 13:02:16.985: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7393 Mar 25 13:02:17.018: INFO: creating *v1.Role: csi-mock-volumes-7393-8691/external-attacher-cfg-csi-mock-volumes-7393 Mar 25 13:02:17.021: INFO: creating *v1.RoleBinding: csi-mock-volumes-7393-8691/csi-attacher-role-cfg Mar 25 13:02:17.031: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-provisioner Mar 25 13:02:17.047: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7393 Mar 25 13:02:17.047: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7393 Mar 25 13:02:17.061: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7393 Mar 25 13:02:17.067: INFO: creating *v1.Role: csi-mock-volumes-7393-8691/external-provisioner-cfg-csi-mock-volumes-7393 Mar 25 
13:02:17.089: INFO: creating *v1.RoleBinding: csi-mock-volumes-7393-8691/csi-provisioner-role-cfg Mar 25 13:02:17.168: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-resizer Mar 25 13:02:17.171: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7393 Mar 25 13:02:17.172: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7393 Mar 25 13:02:17.176: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7393 Mar 25 13:02:17.182: INFO: creating *v1.Role: csi-mock-volumes-7393-8691/external-resizer-cfg-csi-mock-volumes-7393 Mar 25 13:02:17.203: INFO: creating *v1.RoleBinding: csi-mock-volumes-7393-8691/csi-resizer-role-cfg Mar 25 13:02:17.219: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-snapshotter Mar 25 13:02:17.237: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7393 Mar 25 13:02:17.237: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7393 Mar 25 13:02:17.250: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7393 Mar 25 13:02:17.254: INFO: creating *v1.Role: csi-mock-volumes-7393-8691/external-snapshotter-leaderelection-csi-mock-volumes-7393 Mar 25 13:02:17.260: INFO: creating *v1.RoleBinding: csi-mock-volumes-7393-8691/external-snapshotter-leaderelection Mar 25 13:02:17.306: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-mock Mar 25 13:02:17.314: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7393 Mar 25 13:02:17.346: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7393 Mar 25 13:02:17.375: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7393 Mar 25 13:02:17.395: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7393 Mar 25 13:02:17.449: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-resizer-role-csi-mock-volumes-7393 Mar 25 13:02:17.464: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7393 Mar 25 13:02:17.476: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7393 Mar 25 13:02:17.509: INFO: creating *v1.StatefulSet: csi-mock-volumes-7393-8691/csi-mockplugin Mar 25 13:02:17.525: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7393 Mar 25 13:02:17.546: INFO: creating *v1.StatefulSet: csi-mock-volumes-7393-8691/csi-mockplugin-attacher Mar 25 13:02:17.587: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7393" Mar 25 13:02:17.621: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7393 to register on node latest-worker STEP: Creating pod Mar 25 13:02:32.685: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 13:02:54.801: FAIL: pod unexpectedly started to run Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1232 +0xad9 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003264a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc003264a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc003264a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 STEP: Deleting pod pvc-volume-tester-8ltmt Mar 25 13:02:54.802: INFO: Deleting pod "pvc-volume-tester-8ltmt" in namespace "csi-mock-volumes-7393" Mar 25 13:02:54.809: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8ltmt" to be fully deleted STEP: Deleting claim pvc-xt9fh Mar 25 13:03:06.831: INFO: Waiting up to 2m0s for PersistentVolume pvc-8c51f028-b7e0-4399-953e-dcd29c73d27e to get deleted Mar 25 13:03:06.856: INFO: PersistentVolume 
pvc-8c51f028-b7e0-4399-953e-dcd29c73d27e found and phase=Bound (24.993141ms) Mar 25 13:03:08.859: INFO: PersistentVolume pvc-8c51f028-b7e0-4399-953e-dcd29c73d27e was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-7393 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7393 STEP: Waiting for namespaces [csi-mock-volumes-7393] to vanish STEP: uninstalling csi mock driver Mar 25 13:03:16.882: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-attacher Mar 25 13:03:16.887: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7393 Mar 25 13:03:17.005: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7393 Mar 25 13:03:17.037: INFO: deleting *v1.Role: csi-mock-volumes-7393-8691/external-attacher-cfg-csi-mock-volumes-7393 Mar 25 13:03:17.068: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7393-8691/csi-attacher-role-cfg Mar 25 13:03:17.072: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-provisioner Mar 25 13:03:17.146: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7393 Mar 25 13:03:17.407: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7393 Mar 25 13:03:17.426: INFO: deleting *v1.Role: csi-mock-volumes-7393-8691/external-provisioner-cfg-csi-mock-volumes-7393 Mar 25 13:03:17.627: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7393-8691/csi-provisioner-role-cfg Mar 25 13:03:17.883: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-resizer Mar 25 13:03:17.893: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7393 Mar 25 13:03:17.904: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7393 Mar 25 13:03:17.953: INFO: deleting *v1.Role: csi-mock-volumes-7393-8691/external-resizer-cfg-csi-mock-volumes-7393 Mar 25 13:03:18.082: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7393-8691/csi-resizer-role-cfg Mar 25 
13:03:18.292: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-snapshotter Mar 25 13:03:18.590: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7393 Mar 25 13:03:18.815: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7393 Mar 25 13:03:18.903: INFO: deleting *v1.Role: csi-mock-volumes-7393-8691/external-snapshotter-leaderelection-csi-mock-volumes-7393 Mar 25 13:03:19.258: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7393-8691/external-snapshotter-leaderelection Mar 25 13:03:19.434: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7393-8691/csi-mock Mar 25 13:03:19.906: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7393 Mar 25 13:03:20.291: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7393 Mar 25 13:03:20.454: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7393 Mar 25 13:03:20.554: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7393 Mar 25 13:03:20.894: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7393 Mar 25 13:03:21.063: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7393 Mar 25 13:03:21.089: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7393 Mar 25 13:03:21.100: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7393-8691/csi-mockplugin Mar 25 13:03:21.155: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7393 Mar 25 13:03:21.303: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7393-8691/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7393-8691 STEP: Waiting for namespaces [csi-mock-volumes-7393-8691] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 
13:04:17.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [121.088 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, no capacity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 Mar 25 13:02:54.801: pod unexpectedly started to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1232 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":133,"completed":65,"skipped":4018,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:04:17.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace 
object, basename csi-mock-volumes-3728 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 13:04:18.050: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-attacher Mar 25 13:04:18.054: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3728 Mar 25 13:04:18.054: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3728 Mar 25 13:04:18.061: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3728 Mar 25 13:04:18.067: INFO: creating *v1.Role: csi-mock-volumes-3728-2062/external-attacher-cfg-csi-mock-volumes-3728 Mar 25 13:04:18.253: INFO: creating *v1.RoleBinding: csi-mock-volumes-3728-2062/csi-attacher-role-cfg Mar 25 13:04:18.379: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-provisioner Mar 25 13:04:18.423: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3728 Mar 25 13:04:18.423: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3728 Mar 25 13:04:18.466: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3728 Mar 25 13:04:18.477: INFO: creating *v1.Role: csi-mock-volumes-3728-2062/external-provisioner-cfg-csi-mock-volumes-3728 Mar 25 13:04:18.526: INFO: creating *v1.RoleBinding: csi-mock-volumes-3728-2062/csi-provisioner-role-cfg Mar 25 13:04:18.537: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-resizer Mar 25 13:04:18.544: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3728 Mar 25 13:04:18.544: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3728 Mar 25 13:04:18.549: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3728 Mar 25 13:04:18.599: INFO: creating *v1.Role: csi-mock-volumes-3728-2062/external-resizer-cfg-csi-mock-volumes-3728 Mar 25 13:04:18.659: INFO: creating *v1.RoleBinding: csi-mock-volumes-3728-2062/csi-resizer-role-cfg Mar 25 
13:04:18.663: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-snapshotter Mar 25 13:04:18.681: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3728 Mar 25 13:04:18.681: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3728 Mar 25 13:04:18.687: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3728 Mar 25 13:04:18.724: INFO: creating *v1.Role: csi-mock-volumes-3728-2062/external-snapshotter-leaderelection-csi-mock-volumes-3728 Mar 25 13:04:18.735: INFO: creating *v1.RoleBinding: csi-mock-volumes-3728-2062/external-snapshotter-leaderelection Mar 25 13:04:18.779: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-mock Mar 25 13:04:18.783: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3728 Mar 25 13:04:18.794: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3728 Mar 25 13:04:18.800: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3728 Mar 25 13:04:18.806: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3728 Mar 25 13:04:18.833: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3728 Mar 25 13:04:18.850: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3728 Mar 25 13:04:18.872: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3728 Mar 25 13:04:18.878: INFO: creating *v1.StatefulSet: csi-mock-volumes-3728-2062/csi-mockplugin Mar 25 13:04:18.911: INFO: creating *v1.StatefulSet: csi-mock-volumes-3728-2062/csi-mockplugin-attacher Mar 25 13:04:18.916: INFO: creating *v1.StatefulSet: csi-mock-volumes-3728-2062/csi-mockplugin-resizer Mar 25 13:04:18.966: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3728 to register on node latest-worker2 STEP: Creating pod Mar 25 13:04:35.334: INFO: Warning: Making 
PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 13:04:35.436: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-jwkj9] to have phase Bound Mar 25 13:04:35.493: INFO: PersistentVolumeClaim pvc-jwkj9 found but phase is Pending instead of Bound. Mar 25 13:04:38.013: INFO: PersistentVolumeClaim pvc-jwkj9 found and phase=Bound (2.577257319s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-xqhp4 Mar 25 13:05:00.377: INFO: Deleting pod "pvc-volume-tester-xqhp4" in namespace "csi-mock-volumes-3728" Mar 25 13:05:00.383: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xqhp4" to be fully deleted STEP: Deleting claim pvc-jwkj9 Mar 25 13:05:36.556: INFO: Waiting up to 2m0s for PersistentVolume pvc-df1be3b1-f801-4199-ab59-8893ad7cb6a1 to get deleted Mar 25 13:05:36.978: INFO: PersistentVolume pvc-df1be3b1-f801-4199-ab59-8893ad7cb6a1 found and phase=Bound (422.479383ms) Mar 25 13:05:38.982: INFO: PersistentVolume pvc-df1be3b1-f801-4199-ab59-8893ad7cb6a1 was removed STEP: Deleting storageclass csi-mock-volumes-3728-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3728 STEP: Waiting for namespaces [csi-mock-volumes-3728] to vanish STEP: uninstalling csi mock driver Mar 25 13:05:45.081: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-attacher Mar 25 13:05:45.088: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3728 Mar 25 13:05:45.100: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3728 Mar 25 13:05:45.107: INFO: deleting *v1.Role: csi-mock-volumes-3728-2062/external-attacher-cfg-csi-mock-volumes-3728 Mar 25 13:05:45.113: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3728-2062/csi-attacher-role-cfg Mar 25 13:05:45.118: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-provisioner Mar 25 13:05:45.158: INFO: deleting 
*v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3728 Mar 25 13:05:45.174: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3728 Mar 25 13:05:45.186: INFO: deleting *v1.Role: csi-mock-volumes-3728-2062/external-provisioner-cfg-csi-mock-volumes-3728 Mar 25 13:05:45.193: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3728-2062/csi-provisioner-role-cfg Mar 25 13:05:45.199: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-resizer Mar 25 13:05:45.223: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3728 Mar 25 13:05:45.300: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3728 Mar 25 13:05:45.326: INFO: deleting *v1.Role: csi-mock-volumes-3728-2062/external-resizer-cfg-csi-mock-volumes-3728 Mar 25 13:05:45.339: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3728-2062/csi-resizer-role-cfg Mar 25 13:05:45.355: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-snapshotter Mar 25 13:05:45.362: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3728 Mar 25 13:05:45.420: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3728 Mar 25 13:05:45.442: INFO: deleting *v1.Role: csi-mock-volumes-3728-2062/external-snapshotter-leaderelection-csi-mock-volumes-3728 Mar 25 13:05:45.464: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3728-2062/external-snapshotter-leaderelection Mar 25 13:05:45.537: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3728-2062/csi-mock Mar 25 13:05:45.553: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3728 Mar 25 13:05:45.571: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3728 Mar 25 13:05:45.693: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3728 Mar 25 13:05:45.702: INFO: deleting *v1.ClusterRoleBinding: 
psp-csi-controller-driver-registrar-role-csi-mock-volumes-3728 Mar 25 13:05:45.715: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3728 Mar 25 13:05:45.721: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3728 Mar 25 13:05:45.727: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3728 Mar 25 13:05:45.733: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3728-2062/csi-mockplugin Mar 25 13:05:45.739: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3728-2062/csi-mockplugin-attacher Mar 25 13:05:45.745: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3728-2062/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-3728-2062 STEP: Waiting for namespaces [csi-mock-volumes-3728-2062] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:06:37.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:140.418 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":133,"completed":66,"skipped":4035,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS 
------------------------------ [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:61 [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:06:37.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:46 Mar 25 13:06:37.914: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:06:37.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-7135" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.087 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should only be allowed to provision PDs in zones where nodes exist [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:61 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:47 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:06:37.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 STEP: Creating a pod to test downward API volume plugin Mar 25 13:06:38.122: INFO: Waiting up to 5m0s for pod "metadata-volume-bcc3fab9-4677-4e94-a546-a636517f37d7" in namespace "downward-api-5816" to be "Succeeded or Failed" Mar 25 13:06:38.158: INFO: Pod 
"metadata-volume-bcc3fab9-4677-4e94-a546-a636517f37d7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.113931ms Mar 25 13:06:40.162: INFO: Pod "metadata-volume-bcc3fab9-4677-4e94-a546-a636517f37d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039197815s Mar 25 13:06:42.166: INFO: Pod "metadata-volume-bcc3fab9-4677-4e94-a546-a636517f37d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043897116s Mar 25 13:06:44.451: INFO: Pod "metadata-volume-bcc3fab9-4677-4e94-a546-a636517f37d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.328790024s STEP: Saw pod success Mar 25 13:06:44.451: INFO: Pod "metadata-volume-bcc3fab9-4677-4e94-a546-a636517f37d7" satisfied condition "Succeeded or Failed" Mar 25 13:06:44.455: INFO: Trying to get logs from node latest-worker pod metadata-volume-bcc3fab9-4677-4e94-a546-a636517f37d7 container client-container: STEP: delete the pod Mar 25 13:06:44.682: INFO: Waiting for pod metadata-volume-bcc3fab9-4677-4e94-a546-a636517f37d7 to disappear Mar 25 13:06:44.725: INFO: Pod metadata-volume-bcc3fab9-4677-4e94-a546-a636517f37d7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:06:44.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5816" for this suite. 
• [SLOW TEST:6.810 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":67,"skipped":4092,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:06:44.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-3867 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 13:06:45.016: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-attacher Mar 25 13:06:45.019: INFO: creating 
*v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3867 Mar 25 13:06:45.019: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3867 Mar 25 13:06:45.032: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3867 Mar 25 13:06:45.036: INFO: creating *v1.Role: csi-mock-volumes-3867-3787/external-attacher-cfg-csi-mock-volumes-3867 Mar 25 13:06:45.042: INFO: creating *v1.RoleBinding: csi-mock-volumes-3867-3787/csi-attacher-role-cfg Mar 25 13:06:45.100: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-provisioner Mar 25 13:06:45.104: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3867 Mar 25 13:06:45.104: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3867 Mar 25 13:06:45.115: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3867 Mar 25 13:06:45.164: INFO: creating *v1.Role: csi-mock-volumes-3867-3787/external-provisioner-cfg-csi-mock-volumes-3867 Mar 25 13:06:45.176: INFO: creating *v1.RoleBinding: csi-mock-volumes-3867-3787/csi-provisioner-role-cfg Mar 25 13:06:45.194: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-resizer Mar 25 13:06:45.229: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3867 Mar 25 13:06:45.229: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3867 Mar 25 13:06:45.236: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3867 Mar 25 13:06:45.241: INFO: creating *v1.Role: csi-mock-volumes-3867-3787/external-resizer-cfg-csi-mock-volumes-3867 Mar 25 13:06:45.247: INFO: creating *v1.RoleBinding: csi-mock-volumes-3867-3787/csi-resizer-role-cfg Mar 25 13:06:45.283: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-snapshotter Mar 25 13:06:45.314: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3867 Mar 25 13:06:45.314: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-3867 Mar 25 13:06:45.361: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3867 Mar 25 13:06:45.365: INFO: creating *v1.Role: csi-mock-volumes-3867-3787/external-snapshotter-leaderelection-csi-mock-volumes-3867 Mar 25 13:06:45.373: INFO: creating *v1.RoleBinding: csi-mock-volumes-3867-3787/external-snapshotter-leaderelection Mar 25 13:06:45.391: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-mock Mar 25 13:06:45.403: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3867 Mar 25 13:06:45.409: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3867 Mar 25 13:06:45.535: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3867 Mar 25 13:06:45.539: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3867 Mar 25 13:06:45.547: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3867 Mar 25 13:06:45.614: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3867 Mar 25 13:06:45.625: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3867 Mar 25 13:06:45.631: INFO: creating *v1.StatefulSet: csi-mock-volumes-3867-3787/csi-mockplugin Mar 25 13:06:45.673: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3867 Mar 25 13:06:45.691: INFO: creating *v1.StatefulSet: csi-mock-volumes-3867-3787/csi-mockplugin-attacher Mar 25 13:06:45.703: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3867" Mar 25 13:06:45.721: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3867 to register on node latest-worker STEP: Creating pod Mar 25 13:06:55.618: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 13:06:55.692: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-l2775] to have phase 
Bound Mar 25 13:06:55.709: INFO: PersistentVolumeClaim pvc-l2775 found but phase is Pending instead of Bound. Mar 25 13:06:57.892: INFO: PersistentVolumeClaim pvc-l2775 found and phase=Bound (2.199944744s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-spxb9 Mar 25 13:07:12.229: INFO: Deleting pod "pvc-volume-tester-spxb9" in namespace "csi-mock-volumes-3867" Mar 25 13:07:12.234: INFO: Wait up to 5m0s for pod "pvc-volume-tester-spxb9" to be fully deleted STEP: Deleting claim pvc-l2775 Mar 25 13:08:16.324: INFO: Waiting up to 2m0s for PersistentVolume pvc-2a4ccec2-5de9-48c0-a81e-8fe560ebf813 to get deleted Mar 25 13:08:16.344: INFO: PersistentVolume pvc-2a4ccec2-5de9-48c0-a81e-8fe560ebf813 found and phase=Bound (19.07205ms) Mar 25 13:08:18.349: INFO: PersistentVolume pvc-2a4ccec2-5de9-48c0-a81e-8fe560ebf813 was removed STEP: Deleting storageclass csi-mock-volumes-3867-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3867 STEP: Waiting for namespaces [csi-mock-volumes-3867] to vanish STEP: uninstalling csi mock driver Mar 25 13:08:24.521: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-attacher Mar 25 13:08:24.528: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3867 Mar 25 13:08:24.539: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3867 Mar 25 13:08:24.596: INFO: deleting *v1.Role: csi-mock-volumes-3867-3787/external-attacher-cfg-csi-mock-volumes-3867 Mar 25 13:08:24.605: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3867-3787/csi-attacher-role-cfg Mar 25 13:08:24.616: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-provisioner Mar 25 13:08:24.622: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3867 Mar 25 13:08:24.642: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3867 Mar 25 13:08:24.651: INFO: deleting *v1.Role: 
csi-mock-volumes-3867-3787/external-provisioner-cfg-csi-mock-volumes-3867 Mar 25 13:08:24.658: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3867-3787/csi-provisioner-role-cfg Mar 25 13:08:24.664: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-resizer Mar 25 13:08:24.684: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3867 Mar 25 13:08:24.724: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3867 Mar 25 13:08:24.740: INFO: deleting *v1.Role: csi-mock-volumes-3867-3787/external-resizer-cfg-csi-mock-volumes-3867 Mar 25 13:08:24.745: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3867-3787/csi-resizer-role-cfg Mar 25 13:08:24.748: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-snapshotter Mar 25 13:08:24.771: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3867 Mar 25 13:08:24.782: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3867 Mar 25 13:08:24.790: INFO: deleting *v1.Role: csi-mock-volumes-3867-3787/external-snapshotter-leaderelection-csi-mock-volumes-3867 Mar 25 13:08:24.795: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3867-3787/external-snapshotter-leaderelection Mar 25 13:08:24.816: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3867-3787/csi-mock Mar 25 13:08:24.843: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3867 Mar 25 13:08:24.855: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3867 Mar 25 13:08:24.863: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3867 Mar 25 13:08:24.869: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3867 Mar 25 13:08:24.875: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3867 Mar 25 13:08:24.881: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-snapshotter-role-csi-mock-volumes-3867 Mar 25 13:08:24.887: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3867 Mar 25 13:08:24.893: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3867-3787/csi-mockplugin Mar 25 13:08:24.900: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3867 Mar 25 13:08:24.997: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3867-3787/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3867-3787 STEP: Waiting for namespaces [csi-mock-volumes-3867-3787] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:09:23.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:158.544 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":133,"completed":68,"skipped":4099,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:09:23.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 13:09:23.383: INFO: Waiting up to 5m0s for pod "pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd" in namespace "emptydir-4688" to be "Succeeded or Failed" Mar 25 13:09:23.388: INFO: Pod "pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360412ms Mar 25 13:09:25.835: INFO: Pod "pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.45146254s Mar 25 13:09:28.478: INFO: Pod "pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.094581827s Mar 25 13:09:30.482: INFO: Pod "pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd": Phase="Running", Reason="", readiness=true. Elapsed: 7.098813532s Mar 25 13:09:32.487: INFO: Pod "pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.103055971s STEP: Saw pod success Mar 25 13:09:32.487: INFO: Pod "pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd" satisfied condition "Succeeded or Failed" Mar 25 13:09:32.489: INFO: Trying to get logs from node latest-worker pod pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd container test-container: STEP: delete the pod Mar 25 13:09:33.701: INFO: Waiting for pod pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd to disappear Mar 25 13:09:33.743: INFO: Pod pod-ad1eb6bd-29a9-4649-b6e8-51ad37399cdd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:09:33.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4688" for this suite. • [SLOW TEST:10.481 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":133,"completed":69,"skipped":4108,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics 
should create prometheus metrics for volume provisioning and attach/detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:09:33.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Mar 25 13:09:35.195: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:09:35.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2890" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [1.467 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:09:35.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Mar 25 13:09:35.563: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Mar 25 13:09:35.638: INFO: Default storage class: "standard" Mar 25 13:09:35.638: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: 
Waiting for PVC to become Bound Mar 25 13:10:04.164: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-protectionb49x9] to have phase Bound Mar 25 13:10:04.614: INFO: PersistentVolumeClaim pvc-protectionb49x9 found and phase=Bound (450.295445ms) STEP: Checking that PVC Protection finalizer is set [It] Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod STEP: Checking that the PVC status is Terminating STEP: Creating second Pod whose scheduling fails because it uses a PVC that is being deleted Mar 25 13:10:05.055: INFO: Waiting up to 5m0s for pod "pvc-tester-ksb75" in namespace "pvc-protection-2806" to be "Unschedulable" Mar 25 13:10:05.065: INFO: Pod "pvc-tester-ksb75": Phase="Pending", Reason="", readiness=false. Elapsed: 10.586767ms Mar 25 13:10:07.069: INFO: Pod "pvc-tester-ksb75": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01438162s Mar 25 13:10:07.069: INFO: Pod "pvc-tester-ksb75" satisfied condition "Unschedulable" STEP: Deleting the second pod that uses the PVC that is being deleted Mar 25 13:10:07.072: INFO: Deleting pod "pvc-tester-ksb75" in namespace "pvc-protection-2806" Mar 25 13:10:07.934: INFO: Wait up to 5m0s for pod "pvc-tester-ksb75" to be fully deleted STEP: Checking again that the PVC status is Terminating STEP: Deleting the first pod that uses the PVC Mar 25 13:10:08.410: INFO: Deleting pod "pvc-tester-fr6vq" in namespace "pvc-protection-2806" Mar 25 13:10:08.415: INFO: Wait up to 5m0s for pod "pvc-tester-fr6vq" to be fully deleted STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod Mar 25 13:10:26.776: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionb49x9 to be removed Mar 25 13:10:26.779: INFO: Claim "pvc-protectionb49x9" in namespace "pvc-protection-2806" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:26.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-2806" for this suite. 
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:51.833 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":133,"completed":70,"skipped":4235,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSS ------------------------------ [sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:126 [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:27.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Mar 25 13:10:27.575: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:27.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6944" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:110 Mar 25 13:10:27.583: INFO: AfterEach: Cleaning up test resources Mar 25 13:10:27.583: INFO: pvc is nil Mar 25 13:10:27.583: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.522 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:126 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:27.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 13:10:34.098: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-e7fde516-2869-4c98-b166-f8afbde26f98 && mount --bind /tmp/local-volume-test-e7fde516-2869-4c98-b166-f8afbde26f98 /tmp/local-volume-test-e7fde516-2869-4c98-b166-f8afbde26f98] Namespace:persistent-local-volumes-test-2940 PodName:hostexec-latest-worker2-zzfcm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:10:34.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 13:10:34.738: INFO: Creating a PV followed by a PVC Mar 25 13:10:34.802: INFO: Waiting for PV local-pvxkq9z to bind to PVC pvc-29vfh Mar 25 13:10:34.802: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-29vfh] to have phase Bound Mar 25 13:10:34.830: INFO: PersistentVolumeClaim pvc-29vfh found but phase is Pending instead of Bound. 
Mar 25 13:10:36.883: INFO: PersistentVolumeClaim pvc-29vfh found and phase=Bound (2.0806637s) Mar 25 13:10:36.883: INFO: Waiting up to 3m0s for PersistentVolume local-pvxkq9z to have phase Bound Mar 25 13:10:36.955: INFO: PersistentVolume local-pvxkq9z found and phase=Bound (72.281981ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Mar 25 13:10:46.128: INFO: pod "pod-2b717977-a0b6-4701-a8d9-e926e29682ec" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 13:10:46.128: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2940 PodName:pod-2b717977-a0b6-4701-a8d9-e926e29682ec ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:10:46.128: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:10:46.260: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Mar 25 13:10:46.260: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2940 PodName:pod-2b717977-a0b6-4701-a8d9-e926e29682ec ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:10:46.260: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:10:46.338: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2b717977-a0b6-4701-a8d9-e926e29682ec in namespace persistent-local-volumes-test-2940 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 13:10:46.344: INFO: Deleting PersistentVolumeClaim "pvc-29vfh" Mar 25 13:10:46.441: INFO: Deleting PersistentVolume "local-pvxkq9z" STEP: Removing the test directory Mar 25 13:10:46.477: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-e7fde516-2869-4c98-b166-f8afbde26f98 && rm -r /tmp/local-volume-test-e7fde516-2869-4c98-b166-f8afbde26f98] Namespace:persistent-local-volumes-test-2940 PodName:hostexec-latest-worker2-zzfcm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:10:46.477: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:46.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2940" for this suite. 
• [SLOW TEST:19.077 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":71,"skipped":4287,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:46.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 STEP: Creating a pod to test downward API volume plugin Mar 25 13:10:47.854: INFO: Waiting up to 5m0s for pod "metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65" in namespace "projected-9547" to be "Succeeded or Failed" Mar 25 13:10:47.858: INFO: Pod "metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65": Phase="Pending", Reason="", readiness=false. Elapsed: 3.558391ms Mar 25 13:10:50.101: INFO: Pod "metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247316273s Mar 25 13:10:52.260: INFO: Pod "metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.405960539s Mar 25 13:10:54.327: INFO: Pod "metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.472840284s Mar 25 13:10:56.404: INFO: Pod "metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65": Phase="Running", Reason="", readiness=true. Elapsed: 8.550533369s Mar 25 13:10:58.410: INFO: Pod "metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.555773417s STEP: Saw pod success Mar 25 13:10:58.410: INFO: Pod "metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65" satisfied condition "Succeeded or Failed" Mar 25 13:10:58.412: INFO: Trying to get logs from node latest-worker pod metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65 container client-container: STEP: delete the pod Mar 25 13:10:58.460: INFO: Waiting for pod metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65 to disappear Mar 25 13:10:58.463: INFO: Pod metadata-volume-a45138a3-e958-4c87-b2b7-9a4c46cd5d65 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:58.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9547" for this suite. • [SLOW TEST:11.807 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":72,"skipped":4307,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 [BeforeEach] [sig-storage] Pod Disks 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:58.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Mar 25 13:10:58.539: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:10:58.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-7245" for this suite. 
S [SKIPPING] [0.077 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:459 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:10:58.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 STEP: Building a driver namespace object, basename csi-mock-volumes-821 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 13:10:59.223: INFO: creating *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-attacher Mar 25 13:10:59.227: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-821 Mar 25 13:10:59.227: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-821 Mar 25 13:10:59.248: INFO: creating *v1.ClusterRoleBinding: 
csi-attacher-role-csi-mock-volumes-821 Mar 25 13:10:59.260: INFO: creating *v1.Role: csi-mock-volumes-821-4865/external-attacher-cfg-csi-mock-volumes-821 Mar 25 13:10:59.292: INFO: creating *v1.RoleBinding: csi-mock-volumes-821-4865/csi-attacher-role-cfg Mar 25 13:10:59.296: INFO: creating *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-provisioner Mar 25 13:10:59.333: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-821 Mar 25 13:10:59.333: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-821 Mar 25 13:10:59.340: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-821 Mar 25 13:10:59.343: INFO: creating *v1.Role: csi-mock-volumes-821-4865/external-provisioner-cfg-csi-mock-volumes-821 Mar 25 13:10:59.364: INFO: creating *v1.RoleBinding: csi-mock-volumes-821-4865/csi-provisioner-role-cfg Mar 25 13:10:59.367: INFO: creating *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-resizer Mar 25 13:10:59.373: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-821 Mar 25 13:10:59.373: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-821 Mar 25 13:10:59.379: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-821 Mar 25 13:10:59.385: INFO: creating *v1.Role: csi-mock-volumes-821-4865/external-resizer-cfg-csi-mock-volumes-821 Mar 25 13:10:59.458: INFO: creating *v1.RoleBinding: csi-mock-volumes-821-4865/csi-resizer-role-cfg Mar 25 13:10:59.503: INFO: creating *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-snapshotter Mar 25 13:10:59.511: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-821 Mar 25 13:10:59.511: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-821 Mar 25 13:10:59.517: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-821 Mar 25 13:10:59.523: INFO: creating *v1.Role: csi-mock-volumes-821-4865/external-snapshotter-leaderelection-csi-mock-volumes-821 
Mar 25 13:10:59.540: INFO: creating *v1.RoleBinding: csi-mock-volumes-821-4865/external-snapshotter-leaderelection Mar 25 13:10:59.547: INFO: creating *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-mock Mar 25 13:10:59.590: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-821 Mar 25 13:10:59.601: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-821 Mar 25 13:10:59.607: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-821 Mar 25 13:10:59.629: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-821 Mar 25 13:10:59.643: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-821 Mar 25 13:10:59.649: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-821 Mar 25 13:10:59.655: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-821 Mar 25 13:10:59.806: INFO: creating *v1.StatefulSet: csi-mock-volumes-821-4865/csi-mockplugin Mar 25 13:10:59.813: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-821 Mar 25 13:10:59.863: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-821" Mar 25 13:10:59.899: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-821 to register on node latest-worker STEP: Creating pod with fsGroup Mar 25 13:11:15.032: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 13:11:15.078: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-z7ktz] to have phase Bound Mar 25 13:11:15.087: INFO: PersistentVolumeClaim pvc-z7ktz found but phase is Pending instead of Bound. 
Mar 25 13:11:17.093: INFO: PersistentVolumeClaim pvc-z7ktz found and phase=Bound (2.014743483s) Mar 25 13:11:25.263: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-821] Namespace:csi-mock-volumes-821 PodName:pvc-volume-tester-r92vr ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:11:25.263: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:11:25.392: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-821/csi-mock-volumes-821'; sync] Namespace:csi-mock-volumes-821 PodName:pvc-volume-tester-r92vr ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:11:25.392: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:12:29.606: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-821/csi-mock-volumes-821] Namespace:csi-mock-volumes-821 PodName:pvc-volume-tester-r92vr ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:12:29.606: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:12:29.716: INFO: pod csi-mock-volumes-821/pvc-volume-tester-r92vr exec for cmd ls -l /mnt/test/csi-mock-volumes-821/csi-mock-volumes-821, stdout: -rw-r--r-- 1 root 17042 13 Mar 25 13:11 /mnt/test/csi-mock-volumes-821/csi-mock-volumes-821, stderr: Mar 25 13:12:29.716: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-821] Namespace:csi-mock-volumes-821 PodName:pvc-volume-tester-r92vr ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:12:29.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-r92vr Mar 25 13:12:29.812: INFO: Deleting pod "pvc-volume-tester-r92vr" in namespace "csi-mock-volumes-821" Mar 25 13:12:29.819: INFO: Wait up to 5m0s for pod "pvc-volume-tester-r92vr" to be 
fully deleted STEP: Deleting claim pvc-z7ktz Mar 25 13:13:15.951: INFO: Waiting up to 2m0s for PersistentVolume pvc-2bdf69e6-26c1-4329-a2dd-acb478d8d433 to get deleted Mar 25 13:13:15.999: INFO: PersistentVolume pvc-2bdf69e6-26c1-4329-a2dd-acb478d8d433 found and phase=Bound (47.682059ms) Mar 25 13:13:18.003: INFO: PersistentVolume pvc-2bdf69e6-26c1-4329-a2dd-acb478d8d433 was removed STEP: Deleting storageclass csi-mock-volumes-821-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-821 STEP: Waiting for namespaces [csi-mock-volumes-821] to vanish STEP: uninstalling csi mock driver Mar 25 13:13:24.030: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-attacher Mar 25 13:13:24.036: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-821 Mar 25 13:13:24.059: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-821 Mar 25 13:13:24.072: INFO: deleting *v1.Role: csi-mock-volumes-821-4865/external-attacher-cfg-csi-mock-volumes-821 Mar 25 13:13:24.079: INFO: deleting *v1.RoleBinding: csi-mock-volumes-821-4865/csi-attacher-role-cfg Mar 25 13:13:24.085: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-provisioner Mar 25 13:13:24.101: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-821 Mar 25 13:13:24.109: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-821 Mar 25 13:13:24.119: INFO: deleting *v1.Role: csi-mock-volumes-821-4865/external-provisioner-cfg-csi-mock-volumes-821 Mar 25 13:13:24.126: INFO: deleting *v1.RoleBinding: csi-mock-volumes-821-4865/csi-provisioner-role-cfg Mar 25 13:13:24.132: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-resizer Mar 25 13:13:24.150: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-821 Mar 25 13:13:24.166: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-821 Mar 25 13:13:24.174: INFO: deleting *v1.Role: 
csi-mock-volumes-821-4865/external-resizer-cfg-csi-mock-volumes-821 Mar 25 13:13:24.199: INFO: deleting *v1.RoleBinding: csi-mock-volumes-821-4865/csi-resizer-role-cfg Mar 25 13:13:24.241: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-snapshotter Mar 25 13:13:24.253: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-821 Mar 25 13:13:24.286: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-821 Mar 25 13:13:24.294: INFO: deleting *v1.Role: csi-mock-volumes-821-4865/external-snapshotter-leaderelection-csi-mock-volumes-821 Mar 25 13:13:24.306: INFO: deleting *v1.RoleBinding: csi-mock-volumes-821-4865/external-snapshotter-leaderelection Mar 25 13:13:24.312: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-821-4865/csi-mock Mar 25 13:13:24.378: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-821 Mar 25 13:13:24.385: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-821 Mar 25 13:13:24.419: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-821 Mar 25 13:13:24.426: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-821 Mar 25 13:13:24.432: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-821 Mar 25 13:13:24.458: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-821 Mar 25 13:13:24.473: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-821 Mar 25 13:13:24.490: INFO: deleting *v1.StatefulSet: csi-mock-volumes-821-4865/csi-mockplugin Mar 25 13:13:24.502: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-821 STEP: deleting the driver namespace: csi-mock-volumes-821-4865 STEP: Waiting for namespaces [csi-mock-volumes-821-4865] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:14:20.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:201.990 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1433 should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":133,"completed":73,"skipped":4431,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:14:20.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Mar 25 13:14:22.681: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-7dd254d9-99e1-4526-a49b-96890133fc03-backend && ln -s /tmp/local-volume-test-7dd254d9-99e1-4526-a49b-96890133fc03-backend /tmp/local-volume-test-7dd254d9-99e1-4526-a49b-96890133fc03] Namespace:persistent-local-volumes-test-4289 PodName:hostexec-latest-worker2-449q5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:14:22.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 13:14:22.784: INFO: Creating a PV followed by a PVC Mar 25 13:14:22.797: INFO: Waiting for PV local-pvn4gx8 to bind to PVC pvc-7qhgc Mar 25 13:14:22.797: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-7qhgc] to have phase Bound Mar 25 13:14:22.854: INFO: PersistentVolumeClaim pvc-7qhgc found but phase is Pending instead of Bound. 
Mar 25 13:14:24.859: INFO: PersistentVolumeClaim pvc-7qhgc found and phase=Bound (2.061680026s) Mar 25 13:14:24.859: INFO: Waiting up to 3m0s for PersistentVolume local-pvn4gx8 to have phase Bound Mar 25 13:14:24.862: INFO: PersistentVolume local-pvn4gx8 found and phase=Bound (2.603809ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 13:14:28.938: INFO: pod "pod-cf322c70-ac98-4ad7-b697-e9bca611c684" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 13:14:28.938: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4289 PodName:pod-cf322c70-ac98-4ad7-b697-e9bca611c684 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:14:28.938: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:14:29.034: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 13:14:29.034: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4289 PodName:pod-cf322c70-ac98-4ad7-b697-e9bca611c684 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:14:29.034: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:14:29.155: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-cf322c70-ac98-4ad7-b697-e9bca611c684 in namespace persistent-local-volumes-test-4289 STEP: Creating pod2 STEP: Creating a pod Mar 25 13:14:33.214: INFO: pod "pod-d3e50791-31dd-4f4b-b8c2-29addaa92532" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 13:14:33.214: INFO: 
ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4289 PodName:pod-d3e50791-31dd-4f4b-b8c2-29addaa92532 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:14:33.214: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:14:33.307: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-d3e50791-31dd-4f4b-b8c2-29addaa92532 in namespace persistent-local-volumes-test-4289 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 13:14:33.314: INFO: Deleting PersistentVolumeClaim "pvc-7qhgc" Mar 25 13:14:33.336: INFO: Deleting PersistentVolume "local-pvn4gx8" STEP: Removing the test directory Mar 25 13:14:33.375: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7dd254d9-99e1-4526-a49b-96890133fc03 && rm -r /tmp/local-volume-test-7dd254d9-99e1-4526-a49b-96890133fc03-backend] Namespace:persistent-local-volumes-test-4289 PodName:hostexec-latest-worker2-449q5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:14:33.375: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:14:33.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4289" for this suite. 
• [SLOW TEST:12.980 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume one after the other
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":74,"skipped":4499,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]}
SSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:14:33.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
STEP: Creating configMap with name configmap-test-volume-5bca17d9-fe79-45ba-a58f-eb5100fe8686
STEP: Creating a pod to test consume configMaps
Mar 25 13:14:33.635: INFO: Waiting up to 5m0s for pod "pod-configmaps-37096ac8-60a8-4e19-853d-e2a436f28f3c" in namespace "configmap-2085" to be "Succeeded or Failed"
Mar 25 13:14:33.673: INFO: Pod "pod-configmaps-37096ac8-60a8-4e19-853d-e2a436f28f3c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.37278ms
Mar 25 13:14:35.699: INFO: Pod "pod-configmaps-37096ac8-60a8-4e19-853d-e2a436f28f3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063920666s
Mar 25 13:14:37.722: INFO: Pod "pod-configmaps-37096ac8-60a8-4e19-853d-e2a436f28f3c": Phase="Running", Reason="", readiness=true. Elapsed: 4.087497839s
Mar 25 13:14:39.747: INFO: Pod "pod-configmaps-37096ac8-60a8-4e19-853d-e2a436f28f3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11182326s
STEP: Saw pod success
Mar 25 13:14:39.747: INFO: Pod "pod-configmaps-37096ac8-60a8-4e19-853d-e2a436f28f3c" satisfied condition "Succeeded or Failed"
Mar 25 13:14:39.750: INFO: Trying to get logs from node latest-worker pod pod-configmaps-37096ac8-60a8-4e19-853d-e2a436f28f3c container agnhost-container:
STEP: delete the pod
Mar 25 13:14:39.879: INFO: Waiting for pod pod-configmaps-37096ac8-60a8-4e19-853d-e2a436f28f3c to disappear
Mar 25 13:14:39.893: INFO: Pod pod-configmaps-37096ac8-60a8-4e19-853d-e2a436f28f3c no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:14:39.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2085" for this suite.
• [SLOW TEST:6.381 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":75,"skipped":4504,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:14:39.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-884c5353-a304-4b67-8360-a64d3ceeaff5
STEP: Creating a pod to test consume configMaps
Mar 25 13:14:40.010: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-44ab1263-f6d3-4da3-ba55-a3d33be925b0" in namespace "projected-1132" to be "Succeeded or Failed"
Mar 25 13:14:40.014: INFO: Pod "pod-projected-configmaps-44ab1263-f6d3-4da3-ba55-a3d33be925b0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.27275ms
Mar 25 13:14:42.017: INFO: Pod "pod-projected-configmaps-44ab1263-f6d3-4da3-ba55-a3d33be925b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006757926s
Mar 25 13:14:44.026: INFO: Pod "pod-projected-configmaps-44ab1263-f6d3-4da3-ba55-a3d33be925b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015570187s
Mar 25 13:14:46.030: INFO: Pod "pod-projected-configmaps-44ab1263-f6d3-4da3-ba55-a3d33be925b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019937431s
STEP: Saw pod success
Mar 25 13:14:46.030: INFO: Pod "pod-projected-configmaps-44ab1263-f6d3-4da3-ba55-a3d33be925b0" satisfied condition "Succeeded or Failed"
Mar 25 13:14:46.054: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-44ab1263-f6d3-4da3-ba55-a3d33be925b0 container agnhost-container:
STEP: delete the pod
Mar 25 13:14:46.159: INFO: Waiting for pod pod-projected-configmaps-44ab1263-f6d3-4da3-ba55-a3d33be925b0 to disappear
Mar 25 13:14:46.174: INFO: Pod pod-projected-configmaps-44ab1263-f6d3-4da3-ba55-a3d33be925b0 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:14:46.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1132" for this suite.
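The repeated `Phase="Pending" … Elapsed:` lines above come from the framework polling the pod until it reaches "Succeeded or Failed" or the 5m0s timeout expires. A minimal shell sketch of that poll loop; `get_phase` is a hypothetical stub standing in for a real API query such as `kubectl get pod "$1" -o jsonpath='{.status.phase}'`:

```shell
# Hypothetical stub: a real implementation would query the API server,
# e.g. kubectl get pod "$1" -o jsonpath='{.status.phase}'
get_phase() { echo "Succeeded"; }

# Poll a pod's phase every 2s until it matches, or fail after a timeout,
# mirroring 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"'.
wait_for_phase() {
  name=$1; want=$2; timeout=$3; elapsed=0
  while [ "$elapsed" -le "$timeout" ]; do
    phase=$(get_phase "$name")
    if [ "$phase" = "$want" ]; then
      echo "pod \"$name\" reached phase $want after ${elapsed}s"
      return 0
    fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
  echo "timed out waiting for pod \"$name\" to reach $want" >&2
  return 1
}

wait_for_phase pod-configmaps-example Succeeded 300
```

On recent kubectl versions the equivalent wait can also be expressed declaratively, e.g. `kubectl wait --for=jsonpath='{.status.phase}'=Succeeded pod/<name> --timeout=5m`.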
• [SLOW TEST:6.514 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":76,"skipped":4529,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:14:46.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 25 13:14:51.064: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ceb2b221-b72d-4cf6-a463-caa574a46e81-backend && mount --bind /tmp/local-volume-test-ceb2b221-b72d-4cf6-a463-caa574a46e81-backend /tmp/local-volume-test-ceb2b221-b72d-4cf6-a463-caa574a46e81-backend && ln -s /tmp/local-volume-test-ceb2b221-b72d-4cf6-a463-caa574a46e81-backend /tmp/local-volume-test-ceb2b221-b72d-4cf6-a463-caa574a46e81] Namespace:persistent-local-volumes-test-610 PodName:hostexec-latest-worker2-kvp6k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 13:14:51.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 13:14:51.172: INFO: Creating a PV followed by a PVC
Mar 25 13:14:51.260: INFO: Waiting for PV local-pvzv4g2 to bind to PVC pvc-psgbx
Mar 25 13:14:51.260: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-psgbx] to have phase Bound
Mar 25 13:14:51.277: INFO: PersistentVolumeClaim pvc-psgbx found but phase is Pending instead of Bound.
Mar 25 13:14:53.281: INFO: PersistentVolumeClaim pvc-psgbx found but phase is Pending instead of Bound.
Mar 25 13:14:55.286: INFO: PersistentVolumeClaim pvc-psgbx found but phase is Pending instead of Bound.
Mar 25 13:14:57.291: INFO: PersistentVolumeClaim pvc-psgbx found but phase is Pending instead of Bound.
Mar 25 13:14:59.296: INFO: PersistentVolumeClaim pvc-psgbx found but phase is Pending instead of Bound.
Mar 25 13:15:01.301: INFO: PersistentVolumeClaim pvc-psgbx found but phase is Pending instead of Bound.
Mar 25 13:15:03.307: INFO: PersistentVolumeClaim pvc-psgbx found and phase=Bound (12.046944396s)
Mar 25 13:15:03.307: INFO: Waiting up to 3m0s for PersistentVolume local-pvzv4g2 to have phase Bound
Mar 25 13:15:03.310: INFO: PersistentVolume local-pvzv4g2 found and phase=Bound (3.084877ms)
[BeforeEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215
STEP: Creating pod1
STEP: Creating a pod
Mar 25 13:15:09.409: INFO: pod "pod-a6e62fe6-56fe-4ef2-87b3-3a446c8825b1" created on Node "latest-worker2"
STEP: Writing in pod1
Mar 25 13:15:09.409: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-610 PodName:pod-a6e62fe6-56fe-4ef2-87b3-3a446c8825b1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 13:15:09.410: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 13:15:09.517: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
[It] should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
STEP: Reading in pod1
Mar 25 13:15:09.517: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-610 PodName:pod-a6e62fe6-56fe-4ef2-87b3-3a446c8825b1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 13:15:09.517: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 13:15:09.607: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
[AfterEach] One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227
STEP: Deleting pod1
STEP: Deleting pod pod-a6e62fe6-56fe-4ef2-87b3-3a446c8825b1 in namespace persistent-local-volumes-test-610
[AfterEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 13:15:09.635: INFO: Deleting PersistentVolumeClaim "pvc-psgbx"
Mar 25 13:15:09.668: INFO: Deleting PersistentVolume "local-pvzv4g2"
STEP: Removing the test directory
Mar 25 13:15:09.686: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-ceb2b221-b72d-4cf6-a463-caa574a46e81 && umount /tmp/local-volume-test-ceb2b221-b72d-4cf6-a463-caa574a46e81-backend && rm -r /tmp/local-volume-test-ceb2b221-b72d-4cf6-a463-caa574a46e81-backend] Namespace:persistent-local-volumes-test-610 PodName:hostexec-latest-worker2-kvp6k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 13:15:09.687: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:15:09.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-610" for this suite.
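The "dir-link-bindmounted" volume exercised above is assembled on the node by the hostexec pod: it creates a backing directory, bind-mounts it onto itself, then publishes it through a symlink; teardown removes the symlink, unmounts, and deletes the directory. A standalone sketch of that sequence with hypothetical paths (the `mount`/`umount` lines need root on the node, as in the test's `nsenter` invocation, so they are left commented):

```shell
# Hypothetical names; the e2e test uses /tmp/local-volume-test-<uuid>.
id="$$-example"
backend="/tmp/local-volume-test-${id}-backend"   # backing directory
link="/tmp/local-volume-test-${id}"              # path the local PV points at

mkdir "$backend"
# mount --bind "$backend" "$backend"   # requires root; the test runs this via nsenter
ln -s "$backend" "$link"               # the PV path is a symlink to the bind mount
readlink "$link"                       # resolves to the backing directory

# Teardown mirrors the test's AfterEach: symlink first, then mount, then directory.
rm "$link"
# umount "$backend"
rm -r "$backend"
```

Cleanup order matters: removing the symlink before the unmount avoids leaving a dangling PV path that still resolves into a mounted directory.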
• [SLOW TEST:23.433 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
One pod requesting one prebound PVC
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":133,"completed":77,"skipped":4558,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:15:09.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should expand volume by restarting pod if attach=off, nodeExpansion=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
STEP: Building a driver namespace object, basename csi-mock-volumes-2426
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 13:15:10.050: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-attacher
Mar 25 13:15:10.053: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2426
Mar 25 13:15:10.053: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2426
Mar 25 13:15:10.056: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2426
Mar 25 13:15:10.085: INFO: creating *v1.Role: csi-mock-volumes-2426-3367/external-attacher-cfg-csi-mock-volumes-2426
Mar 25 13:15:10.108: INFO: creating *v1.RoleBinding: csi-mock-volumes-2426-3367/csi-attacher-role-cfg
Mar 25 13:15:10.121: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-provisioner
Mar 25 13:15:10.163: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2426
Mar 25 13:15:10.163: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2426
Mar 25 13:15:10.169: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2426
Mar 25 13:15:10.175: INFO: creating *v1.Role: csi-mock-volumes-2426-3367/external-provisioner-cfg-csi-mock-volumes-2426
Mar 25 13:15:10.181: INFO: creating *v1.RoleBinding: csi-mock-volumes-2426-3367/csi-provisioner-role-cfg
Mar 25 13:15:10.202: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-resizer
Mar 25 13:15:10.226: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2426
Mar 25 13:15:10.226: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2426
Mar 25 13:15:10.241: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2426
Mar 25 13:15:10.280: INFO: creating *v1.Role: csi-mock-volumes-2426-3367/external-resizer-cfg-csi-mock-volumes-2426
Mar 25 13:15:10.295: INFO: creating *v1.RoleBinding: csi-mock-volumes-2426-3367/csi-resizer-role-cfg
Mar 25 13:15:10.312: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-snapshotter
Mar 25 13:15:10.325: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2426
Mar 25 13:15:10.325: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2426
Mar 25 13:15:10.331: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2426
Mar 25 13:15:10.337: INFO: creating *v1.Role: csi-mock-volumes-2426-3367/external-snapshotter-leaderelection-csi-mock-volumes-2426
Mar 25 13:15:10.366: INFO: creating *v1.RoleBinding: csi-mock-volumes-2426-3367/external-snapshotter-leaderelection
Mar 25 13:15:10.411: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-mock
Mar 25 13:15:10.415: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2426
Mar 25 13:15:10.421: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2426
Mar 25 13:15:10.426: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2426
Mar 25 13:15:10.433: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2426
Mar 25 13:15:10.478: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2426
Mar 25 13:15:10.555: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2426
Mar 25 13:15:10.559: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2426
Mar 25 13:15:10.565: INFO: creating *v1.StatefulSet: csi-mock-volumes-2426-3367/csi-mockplugin
Mar 25 13:15:10.571: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2426
Mar 25 13:15:10.618: INFO: creating *v1.StatefulSet: csi-mock-volumes-2426-3367/csi-mockplugin-resizer
Mar 25 13:15:10.907: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2426"
Mar 25 13:15:11.173: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2426 to register on node latest-worker
STEP: Creating pod
Mar 25 13:15:20.883: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Mar 25 13:15:20.891: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-d7j2h] to have phase Bound
Mar 25 13:15:20.907: INFO: PersistentVolumeClaim pvc-d7j2h found but phase is Pending instead of Bound.
Mar 25 13:15:22.911: INFO: PersistentVolumeClaim pvc-d7j2h found and phase=Bound (2.020610536s)
STEP: Expanding current pvc
STEP: Waiting for persistent volume resize to finish
STEP: Checking for conditions on pvc
STEP: Deleting the previously created pod
Mar 25 13:15:29.009: INFO: Deleting pod "pvc-volume-tester-l9cvr" in namespace "csi-mock-volumes-2426"
Mar 25 13:15:29.014: INFO: Wait up to 5m0s for pod "pvc-volume-tester-l9cvr" to be fully deleted
STEP: Creating a new pod with same volume
STEP: Waiting for PVC resize to finish
STEP: Deleting pod pvc-volume-tester-l9cvr
Mar 25 13:16:27.136: INFO: Deleting pod "pvc-volume-tester-l9cvr" in namespace "csi-mock-volumes-2426"
STEP: Deleting pod pvc-volume-tester-wwqd6
Mar 25 13:16:27.219: INFO: Deleting pod "pvc-volume-tester-wwqd6" in namespace "csi-mock-volumes-2426"
Mar 25 13:16:27.351: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wwqd6" to be fully deleted
STEP: Deleting claim pvc-d7j2h
Mar 25 13:16:29.431: INFO: Waiting up to 2m0s for PersistentVolume pvc-e7fab31c-ae26-4174-ae46-b297f1763c84 to get deleted
Mar 25 13:16:29.435: INFO: PersistentVolume pvc-e7fab31c-ae26-4174-ae46-b297f1763c84 found and phase=Bound (3.742686ms)
Mar 25 13:16:31.439: INFO: PersistentVolume pvc-e7fab31c-ae26-4174-ae46-b297f1763c84 was removed
STEP: Deleting storageclass csi-mock-volumes-2426-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-2426
STEP: Waiting for namespaces [csi-mock-volumes-2426] to vanish
STEP: uninstalling csi mock driver
Mar 25 13:16:37.459: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-attacher
Mar 25 13:16:37.465: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2426
Mar 25 13:16:37.474: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2426
Mar 25 13:16:37.508: INFO: deleting *v1.Role: csi-mock-volumes-2426-3367/external-attacher-cfg-csi-mock-volumes-2426
Mar 25 13:16:37.516: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2426-3367/csi-attacher-role-cfg
Mar 25 13:16:37.522: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-provisioner
Mar 25 13:16:37.527: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2426
Mar 25 13:16:37.537: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2426
Mar 25 13:16:37.563: INFO: deleting *v1.Role: csi-mock-volumes-2426-3367/external-provisioner-cfg-csi-mock-volumes-2426
Mar 25 13:16:37.576: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2426-3367/csi-provisioner-role-cfg
Mar 25 13:16:37.581: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-resizer
Mar 25 13:16:37.593: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2426
Mar 25 13:16:37.599: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2426
Mar 25 13:16:37.641: INFO: deleting *v1.Role: csi-mock-volumes-2426-3367/external-resizer-cfg-csi-mock-volumes-2426
Mar 25 13:16:37.647: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2426-3367/csi-resizer-role-cfg
Mar 25 13:16:37.652: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-snapshotter
Mar 25 13:16:37.659: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2426
Mar 25 13:16:37.693: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2426
Mar 25 13:16:37.708: INFO: deleting *v1.Role: csi-mock-volumes-2426-3367/external-snapshotter-leaderelection-csi-mock-volumes-2426
Mar 25 13:16:37.714: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2426-3367/external-snapshotter-leaderelection
Mar 25 13:16:37.719: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2426-3367/csi-mock
Mar 25 13:16:37.725: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2426
Mar 25 13:16:37.735: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2426
Mar 25 13:16:37.755: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2426
Mar 25 13:16:37.761: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2426
Mar 25 13:16:37.767: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2426
Mar 25 13:16:37.773: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2426
Mar 25 13:16:37.826: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2426
Mar 25 13:16:37.839: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2426-3367/csi-mockplugin
Mar 25 13:16:37.846: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2426
Mar 25 13:16:37.851: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2426-3367/csi-mockplugin-resizer
STEP: deleting the driver namespace: csi-mock-volumes-2426-3367
STEP: Waiting for namespaces [csi-mock-volumes-2426-3367] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:17:27.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:138.045 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI Volume expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
should expand volume by restarting pod if attach=off, nodeExpansion=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":133,"completed":78,"skipped":4578,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:17:27.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not modify fsGroup if fsGroupPolicy=None
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457
STEP: Building a driver namespace object, basename csi-mock-volumes-5292
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 13:17:28.093: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-attacher
Mar 25 13:17:28.098: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5292
Mar 25 13:17:28.098: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5292
Mar 25 13:17:28.112: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5292
Mar 25 13:17:28.116: INFO: creating *v1.Role: csi-mock-volumes-5292-934/external-attacher-cfg-csi-mock-volumes-5292
Mar 25 13:17:28.120: INFO: creating *v1.RoleBinding: csi-mock-volumes-5292-934/csi-attacher-role-cfg
Mar 25 13:17:28.126: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-provisioner
Mar 25 13:17:28.148: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5292
Mar 25 13:17:28.148: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5292
Mar 25 13:17:28.162: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5292
Mar 25 13:17:28.183: INFO: creating *v1.Role: csi-mock-volumes-5292-934/external-provisioner-cfg-csi-mock-volumes-5292
Mar 25 13:17:28.198: INFO: creating *v1.RoleBinding: csi-mock-volumes-5292-934/csi-provisioner-role-cfg
Mar 25 13:17:28.256: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-resizer
Mar 25 13:17:28.264: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5292
Mar 25 13:17:28.264: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5292
Mar 25 13:17:28.270: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5292
Mar 25 13:17:28.276: INFO: creating *v1.Role: csi-mock-volumes-5292-934/external-resizer-cfg-csi-mock-volumes-5292
Mar 25 13:17:28.282: INFO: creating *v1.RoleBinding: csi-mock-volumes-5292-934/csi-resizer-role-cfg
Mar 25 13:17:28.305: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-snapshotter
Mar 25 13:17:28.329: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5292
Mar 25 13:17:28.329: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5292
Mar 25 13:17:28.342: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5292
Mar 25 13:17:28.347: INFO: creating *v1.Role: csi-mock-volumes-5292-934/external-snapshotter-leaderelection-csi-mock-volumes-5292
Mar 25 13:17:28.354: INFO: creating *v1.RoleBinding: csi-mock-volumes-5292-934/external-snapshotter-leaderelection
Mar 25 13:17:28.394: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-mock
Mar 25 13:17:28.397: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5292
Mar 25 13:17:28.420: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5292
Mar 25 13:17:28.447: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5292
Mar 25 13:17:28.462: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5292
Mar 25 13:17:28.477: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5292
Mar 25 13:17:28.491: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5292
Mar 25 13:17:28.519: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5292
Mar 25 13:17:28.523: INFO: creating *v1.StatefulSet: csi-mock-volumes-5292-934/csi-mockplugin
Mar 25 13:17:28.545: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5292
Mar 25 13:17:28.570: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5292"
Mar 25 13:17:28.599: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5292 to register on node latest-worker2
STEP: Creating pod with fsGroup
Mar 25 13:17:43.180: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Mar 25 13:17:43.208: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-dqmfc] to have phase Bound
Mar 25 13:17:43.210: INFO: PersistentVolumeClaim pvc-dqmfc found but phase is Pending instead of Bound.
Mar 25 13:17:45.214: INFO: PersistentVolumeClaim pvc-dqmfc found and phase=Bound (2.005929392s)
Mar 25 13:17:49.254: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-5292] Namespace:csi-mock-volumes-5292 PodName:pvc-volume-tester-7bmdc ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 13:17:49.254: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 13:17:49.367: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-5292/csi-mock-volumes-5292'; sync] Namespace:csi-mock-volumes-5292 PodName:pvc-volume-tester-7bmdc ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 13:17:49.367: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 13:18:30.189: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-5292/csi-mock-volumes-5292] Namespace:csi-mock-volumes-5292 PodName:pvc-volume-tester-7bmdc ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 13:18:30.189: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 13:18:30.291: INFO: pod csi-mock-volumes-5292/pvc-volume-tester-7bmdc exec for cmd ls -l /mnt/test/csi-mock-volumes-5292/csi-mock-volumes-5292, stdout: -rw-r--r-- 1 root root 13 Mar 25 13:17 /mnt/test/csi-mock-volumes-5292/csi-mock-volumes-5292, stderr:
Mar 25 13:18:30.291: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-5292] Namespace:csi-mock-volumes-5292 PodName:pvc-volume-tester-7bmdc ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 13:18:30.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Deleting pod pvc-volume-tester-7bmdc
Mar 25 13:18:30.388: INFO: Deleting pod "pvc-volume-tester-7bmdc" in namespace "csi-mock-volumes-5292"
Mar 25 13:18:30.394: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7bmdc" to be fully deleted
STEP: Deleting claim pvc-dqmfc
Mar 25 13:19:56.413: INFO: Waiting up to 2m0s for PersistentVolume pvc-eddb8175-4328-40a5-8331-d8c516357ade to get deleted
Mar 25 13:19:56.419: INFO: PersistentVolume pvc-eddb8175-4328-40a5-8331-d8c516357ade found and phase=Bound (6.336794ms)
Mar 25 13:19:58.424: INFO: PersistentVolume pvc-eddb8175-4328-40a5-8331-d8c516357ade was removed
STEP: Deleting storageclass csi-mock-volumes-5292-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-5292
STEP: Waiting for namespaces [csi-mock-volumes-5292] to vanish
STEP: uninstalling csi mock driver
Mar 25 13:20:04.442: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-attacher
Mar 25 13:20:04.449: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5292
Mar 25 13:20:04.455: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5292
Mar 25 13:20:04.465: INFO: deleting *v1.Role: csi-mock-volumes-5292-934/external-attacher-cfg-csi-mock-volumes-5292
Mar 25 13:20:04.491: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5292-934/csi-attacher-role-cfg
Mar 25 13:20:04.497: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-provisioner
Mar 25 13:20:04.508: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5292
Mar 25 13:20:04.522: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5292
Mar 25 13:20:04.583: INFO: deleting *v1.Role: csi-mock-volumes-5292-934/external-provisioner-cfg-csi-mock-volumes-5292
Mar 25 13:20:04.592: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5292-934/csi-provisioner-role-cfg
Mar 25 13:20:04.598: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-resizer
Mar 25 13:20:04.604: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5292
Mar 25 13:20:04.610: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5292
Mar 25 13:20:04.621: INFO: deleting *v1.Role: csi-mock-volumes-5292-934/external-resizer-cfg-csi-mock-volumes-5292
Mar 25 13:20:04.629: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5292-934/csi-resizer-role-cfg
Mar 25 13:20:04.635: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-snapshotter
Mar 25 13:20:04.652: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5292
Mar 25 13:20:04.676: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5292
Mar 25 13:20:04.707: INFO: deleting *v1.Role: csi-mock-volumes-5292-934/external-snapshotter-leaderelection-csi-mock-volumes-5292
Mar 25 13:20:04.712: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5292-934/external-snapshotter-leaderelection
Mar 25 13:20:04.717: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-934/csi-mock
Mar 25 13:20:04.723: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5292
Mar 25 13:20:04.730: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5292
Mar 25 13:20:04.735: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5292
Mar 25 13:20:04.741: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5292
Mar 25 13:20:04.761: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5292
Mar 25 13:20:04.772: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5292
Mar 25 13:20:04.777: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5292
Mar 25 13:20:04.795: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5292-934/csi-mockplugin
Mar 25 13:20:04.803: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5292
STEP: deleting the driver namespace: csi-mock-volumes-5292-934
STEP: Waiting for namespaces [csi-mock-volumes-5292-934] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:21:00.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:212.947 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1433 should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1457 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":133,"completed":79,"skipped":4616,"failed":2,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes GCEPD should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:155 [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:21:00.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Mar 25 13:21:00.911: INFO: Only 
supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:21:00.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5186" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:110 Mar 25 13:21:00.932: INFO: AfterEach: Cleaning up test resources Mar 25 13:21:00.932: INFO: pvc is nil Mar 25 13:21:00.932: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.089 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:155 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:21:00.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 STEP: Creating a pod to test hostPath mode Mar 25 13:21:01.036: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-375" to be "Succeeded or Failed" Mar 25 13:21:01.082: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 45.656565ms Mar 25 13:21:03.156: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119475394s Mar 25 13:21:05.161: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124853165s Mar 25 13:21:07.170: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133860924s STEP: Saw pod success Mar 25 13:21:07.170: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 25 13:21:07.173: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 25 13:21:07.214: INFO: Waiting for pod pod-host-path-test to disappear Mar 25 13:21:07.235: INFO: Pod pod-host-path-test no longer exists Mar 25 13:21:07.235: FAIL: Unexpected error: <*errors.errorString | 0xc001e31170>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0004d8840, 0x6b6efc8, 0xd, 0xc003b9ec00, 0x0, 0xc0020691c0, 0x1, 0x1, 0x6d64568) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5 k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:564 k8s.io/kubernetes/test/e2e/common/storage.glob..func5.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:59 +0x299 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003264a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc003264a80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc003264a80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "hostpath-375". STEP: Found 7 events. 
Mar 25 13:21:07.257: INFO: At 2021-03-25 13:21:01 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-375/pod-host-path-test to latest-worker Mar 25 13:21:07.257: INFO: At 2021-03-25 13:21:02 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 13:21:07.257: INFO: At 2021-03-25 13:21:03 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Created: Created container test-container-1 Mar 25 13:21:07.257: INFO: At 2021-03-25 13:21:04 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Started: Started container test-container-1 Mar 25 13:21:07.257: INFO: At 2021-03-25 13:21:04 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 13:21:07.257: INFO: At 2021-03-25 13:21:05 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Created: Created container test-container-2 Mar 25 13:21:07.257: INFO: At 2021-03-25 13:21:05 +0000 UTC - event for pod-host-path-test: {kubelet latest-worker} Started: Started container test-container-2 Mar 25 13:21:07.260: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 13:21:07.260: INFO: Mar 25 13:21:07.264: INFO: Logging node info for node latest-control-plane Mar 25 13:21:07.266: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1172891 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 
2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 13:19:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 13:19:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 13:19:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 13:19:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 13:21:07.267: INFO: Logging kubelet events for node latest-control-plane Mar 25 13:21:07.270: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 13:21:07.291: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.291: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 13:21:07.291: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.291: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 13:21:07.291: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.291: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 13:21:07.291: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.291: INFO: Container coredns ready: true, restart count 0 Mar 25 13:21:07.291: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.291: INFO: Container etcd ready: true, restart count 0 Mar 25 13:21:07.291: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.291: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 13:21:07.291: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.291: INFO: Container kindnet-cni ready: true, 
restart count 0 Mar 25 13:21:07.291: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.291: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 13:21:07.291: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.291: INFO: Container coredns ready: true, restart count 0 W0325 13:21:07.296502 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 13:21:07.388: INFO: Latency metrics for node latest-control-plane Mar 25 13:21:07.388: INFO: Logging node info for node latest-worker Mar 25 13:21:07.392: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1172599 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 13:06:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 13:07:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unsc
hedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 13:17:25 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 13:17:25 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 13:17:25 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 13:17:25 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f 
k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 13:21:07.392: INFO: Logging kubelet events for node latest-worker Mar 25 13:21:07.395: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 13:21:07.399: INFO: kindnet-jmhgw started at 2021-03-25 12:24:39 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.399: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 13:21:07.399: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.399: INFO: Container kube-proxy ready: true, restart count 0 W0325 13:21:07.405578 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 13:21:07.512: INFO: Latency metrics for node latest-worker Mar 25 13:21:07.512: INFO: Logging node info for node latest-worker2 Mar 25 13:21:07.516: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1173069 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 13:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 13:05:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 13:20:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 13:20:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 13:20:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 13:20:05 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 13:21:07.517: INFO: Logging kubelet events for node latest-worker2 Mar 25 13:21:07.523: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 13:21:07.535: INFO: kindnet-f7zk8 started at 2021-03-25 12:10:50 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.535: INFO: Container kindnet-cni ready: 
true, restart count 0 Mar 25 13:21:07.535: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.535: INFO: Container volume-tester ready: false, restart count 0 Mar 25 13:21:07.535: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 13:21:07.535: INFO: Container kube-proxy ready: true, restart count 0 W0325 13:21:07.541362 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 13:21:07.654: INFO: Latency metrics for node latest-worker2 Mar 25 13:21:07.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-375" for this suite. • Failure [6.731 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should give a volume the correct mode [LinuxOnly] [NodeConformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 Mar 25 13:21:07.235: Unexpected error: <*errors.errorString | 0xc001e31170>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": tmpfs\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": tmpfs mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 ------------------------------ {"msg":"FAILED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] 
[NodeConformance]","total":133,"completed":79,"skipped":4649,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:21:07.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 Mar 25 13:21:07.726: INFO: Only supported for providers [aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:21:07.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-9920" for this suite. 
S [SKIPPING] [0.071 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Invalid AWS KMS key /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 should report an error and create no PV [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 Only supported for providers [aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:827 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:21:07.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 25 13:21:07.815: INFO: Waiting up to 5m0s for pod "pod-eeb056c3-3547-47ba-a818-2045097914d4" in namespace "emptydir-74" to be "Succeeded or Failed" Mar 25 
13:21:07.842: INFO: Pod "pod-eeb056c3-3547-47ba-a818-2045097914d4": Phase="Pending", Reason="", readiness=false. Elapsed: 27.249403ms Mar 25 13:21:09.847: INFO: Pod "pod-eeb056c3-3547-47ba-a818-2045097914d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032316776s Mar 25 13:21:11.855: INFO: Pod "pod-eeb056c3-3547-47ba-a818-2045097914d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039908619s STEP: Saw pod success Mar 25 13:21:11.855: INFO: Pod "pod-eeb056c3-3547-47ba-a818-2045097914d4" satisfied condition "Succeeded or Failed" Mar 25 13:21:11.857: INFO: Trying to get logs from node latest-worker pod pod-eeb056c3-3547-47ba-a818-2045097914d4 container test-container: STEP: delete the pod Mar 25 13:21:11.899: INFO: Waiting for pod pod-eeb056c3-3547-47ba-a818-2045097914d4 to disappear Mar 25 13:21:11.909: INFO: Pod pod-eeb056c3-3547-47ba-a818-2045097914d4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:21:11.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-74" for this suite. 
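The (root,0644,tmpfs) case above boils down to mounting an emptyDir as tmpfs and asserting the created file's mode and owner inside the test container. A minimal sketch of that permission check, minus the tmpfs mount itself (which requires root) and using an illustrative path rather than the test's real volume:

```shell
# Sketch of the mode/owner assertion from the emptydir test, exercised on a
# plain temp directory instead of a real tmpfs-backed emptyDir (illustrative).
tmpdir=$(mktemp -d)
f="$tmpdir/mount-test-file"
: > "$f"
chmod 0644 "$f"
# The e2e test container prints a line of roughly this shape and the
# framework asserts the expected mode substring appears in the output.
stat -c 'mode of file %n: %a' "$f"
rm -rf "$tmpdir"
```

The test additionally checks ownership against the pod's fsGroup, which only applies inside a pod security context and is not reproduced here.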
•{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":133,"completed":80,"skipped":4727,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:21:11.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Mar 25 13:21:12.014: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Mar 25 13:21:12.029: INFO: Waiting up to 30s for PersistentVolume hostpath-wf6v8 to have phase Available Mar 25 13:21:12.077: INFO: PersistentVolume hostpath-wf6v8 found but phase is Pending instead of Available. 
Mar 25 13:21:13.083: INFO: PersistentVolume hostpath-wf6v8 found and phase=Available (1.053971468s) STEP: Checking that PV Protection finalizer is set [It] Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 STEP: Deleting the PV Mar 25 13:21:13.089: INFO: Waiting up to 3m0s for PersistentVolume hostpath-wf6v8 to get deleted Mar 25 13:21:13.094: INFO: PersistentVolume hostpath-wf6v8 found and phase=Available (5.029951ms) Mar 25 13:21:15.099: INFO: PersistentVolume hostpath-wf6v8 was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:21:15.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-4516" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Mar 25 13:21:15.108: INFO: AfterEach: Cleaning up test resources. 
Mar 25 13:21:15.108: INFO: Deleting PersistentVolumeClaim "pvc-8kvm5" Mar 25 13:21:15.156: INFO: Deleting PersistentVolume "hostpath-wf6v8" •{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":133,"completed":81,"skipped":4786,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:21:15.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "latest-worker2" using path "/tmp/local-volume-test-1d2f6748-e030-4523-8995-2e806cb1727f" Mar 25 13:21:19.282: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1d2f6748-e030-4523-8995-2e806cb1727f && dd if=/dev/zero of=/tmp/local-volume-test-1d2f6748-e030-4523-8995-2e806cb1727f/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-1d2f6748-e030-4523-8995-2e806cb1727f/file] Namespace:persistent-local-volumes-test-5669 PodName:hostexec-latest-worker2-xngx7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:21:19.282: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:21:19.504: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1d2f6748-e030-4523-8995-2e806cb1727f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5669 PodName:hostexec-latest-worker2-xngx7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:21:19.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Mar 25 13:21:19.599: INFO: Creating a PV followed by a PVC Mar 25 13:21:19.610: INFO: Waiting for PV local-pvpfjz8 to bind to PVC pvc-cz8v4 Mar 25 13:21:19.610: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-cz8v4] to have phase Bound Mar 25 13:21:19.642: INFO: PersistentVolumeClaim pvc-cz8v4 found but phase is Pending instead of Bound. Mar 25 13:21:21.647: INFO: PersistentVolumeClaim pvc-cz8v4 found but phase is Pending instead of Bound. Mar 25 13:21:23.652: INFO: PersistentVolumeClaim pvc-cz8v4 found but phase is Pending instead of Bound. Mar 25 13:21:25.657: INFO: PersistentVolumeClaim pvc-cz8v4 found but phase is Pending instead of Bound. Mar 25 13:21:27.663: INFO: PersistentVolumeClaim pvc-cz8v4 found but phase is Pending instead of Bound. Mar 25 13:21:29.668: INFO: PersistentVolumeClaim pvc-cz8v4 found but phase is Pending instead of Bound. 
Mar 25 13:21:31.674: INFO: PersistentVolumeClaim pvc-cz8v4 found but phase is Pending instead of Bound. Mar 25 13:21:33.678: INFO: PersistentVolumeClaim pvc-cz8v4 found and phase=Bound (14.068164617s) Mar 25 13:21:33.678: INFO: Waiting up to 3m0s for PersistentVolume local-pvpfjz8 to have phase Bound Mar 25 13:21:33.681: INFO: PersistentVolume local-pvpfjz8 found and phase=Bound (3.119717ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Mar 25 13:21:39.727: INFO: pod "pod-23e04f6f-db27-4f68-a22c-72ef9f4cec8f" created on Node "latest-worker2" STEP: Writing in pod1 Mar 25 13:21:39.727: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5669 PodName:pod-23e04f6f-db27-4f68-a22c-72ef9f4cec8f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:21:39.727: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:21:39.855: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Mar 25 13:21:39.855: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5669 PodName:pod-23e04f6f-db27-4f68-a22c-72ef9f4cec8f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:21:39.855: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:21:39.948: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-23e04f6f-db27-4f68-a22c-72ef9f4cec8f in namespace persistent-local-volumes-test-5669 STEP: Creating pod2 STEP: Creating a pod Mar 25 13:21:44.007: INFO: pod 
"pod-d41b9257-faee-4b5f-993b-a0fb504056c6" created on Node "latest-worker2" STEP: Reading in pod2 Mar 25 13:21:44.007: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5669 PodName:pod-d41b9257-faee-4b5f-993b-a0fb504056c6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 13:21:44.007: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:21:44.104: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-d41b9257-faee-4b5f-993b-a0fb504056c6 in namespace persistent-local-volumes-test-5669 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Mar 25 13:21:44.111: INFO: Deleting PersistentVolumeClaim "pvc-cz8v4" Mar 25 13:21:44.121: INFO: Deleting PersistentVolume "local-pvpfjz8" Mar 25 13:21:44.168: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1d2f6748-e030-4523-8995-2e806cb1727f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5669 PodName:hostexec-latest-worker2-xngx7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:21:44.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "latest-worker2" at path /tmp/local-volume-test-1d2f6748-e030-4523-8995-2e806cb1727f/file Mar 25 13:21:44.305: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5669 PodName:hostexec-latest-worker2-xngx7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Mar 25 13:21:44.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-1d2f6748-e030-4523-8995-2e806cb1727f Mar 25 13:21:44.415: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1d2f6748-e030-4523-8995-2e806cb1727f] Namespace:persistent-local-volumes-test-5669 PodName:hostexec-latest-worker2-xngx7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:21:44.415: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:21:44.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5669" for this suite. • [SLOW TEST:29.381 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":133,"completed":82,"skipped":4861,"failed":3,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity 
CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Pod Disks should be able to delete a non-existent PD without error
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:21:44.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74
[It] should be able to delete a non-existent PD without error
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449
Mar 25 13:21:44.657: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:21:44.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-2490" for this suite.
S [SKIPPING] [0.123 seconds]
[sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449
  Only supported for providers [gce] (not local)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:450
------------------------------
[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:21:44.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] CSIStorageCapacity used, have capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
STEP: Building a driver namespace object, basename csi-mock-volumes-9759
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 13:21:44.813: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-attacher
Mar 25 13:21:44.816: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9759
Mar 25 13:21:44.816: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9759
Mar 25 13:21:44.820: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9759
Mar 25 13:21:44.874: INFO: creating *v1.Role: csi-mock-volumes-9759-8616/external-attacher-cfg-csi-mock-volumes-9759
Mar 25 13:21:44.898: INFO:
creating *v1.RoleBinding: csi-mock-volumes-9759-8616/csi-attacher-role-cfg Mar 25 13:21:44.958: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-provisioner Mar 25 13:21:44.970: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9759 Mar 25 13:21:44.970: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9759 Mar 25 13:21:44.988: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9759 Mar 25 13:21:44.997: INFO: creating *v1.Role: csi-mock-volumes-9759-8616/external-provisioner-cfg-csi-mock-volumes-9759 Mar 25 13:21:45.011: INFO: creating *v1.RoleBinding: csi-mock-volumes-9759-8616/csi-provisioner-role-cfg Mar 25 13:21:45.023: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-resizer Mar 25 13:21:45.031: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9759 Mar 25 13:21:45.031: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9759 Mar 25 13:21:45.071: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9759 Mar 25 13:21:45.114: INFO: creating *v1.Role: csi-mock-volumes-9759-8616/external-resizer-cfg-csi-mock-volumes-9759 Mar 25 13:21:45.126: INFO: creating *v1.RoleBinding: csi-mock-volumes-9759-8616/csi-resizer-role-cfg Mar 25 13:21:45.155: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-snapshotter Mar 25 13:21:45.161: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9759 Mar 25 13:21:45.161: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9759 Mar 25 13:21:45.168: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9759 Mar 25 13:21:45.173: INFO: creating *v1.Role: csi-mock-volumes-9759-8616/external-snapshotter-leaderelection-csi-mock-volumes-9759 Mar 25 13:21:45.276: INFO: creating *v1.RoleBinding: csi-mock-volumes-9759-8616/external-snapshotter-leaderelection Mar 25 13:21:45.312: INFO: creating 
*v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-mock Mar 25 13:21:45.331: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9759 Mar 25 13:21:45.349: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9759 Mar 25 13:21:45.355: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9759 Mar 25 13:21:45.361: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9759 Mar 25 13:21:45.413: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9759 Mar 25 13:21:45.425: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9759 Mar 25 13:21:45.439: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9759 Mar 25 13:21:45.445: INFO: creating *v1.StatefulSet: csi-mock-volumes-9759-8616/csi-mockplugin Mar 25 13:21:45.451: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9759 Mar 25 13:21:45.486: INFO: creating *v1.StatefulSet: csi-mock-volumes-9759-8616/csi-mockplugin-attacher Mar 25 13:21:45.552: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9759" Mar 25 13:21:45.581: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9759 to register on node latest-worker Mar 25 13:21:55.214: FAIL: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-9759 Capacity:100Gi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc0048f45a0>: { ErrStatus: { TypeMeta: 
{Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, }
    the server could not find the requested resource
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func1.14.1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 +0x47a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003264a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc003264a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003264a80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9759
STEP: Waiting for namespaces [csi-mock-volumes-9759] to vanish
STEP: uninstalling csi mock driver
Mar 25 13:22:01.226: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-attacher
Mar 25 13:22:01.231: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9759
Mar 25 13:22:01.242: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9759
Mar 25 13:22:01.250: INFO: deleting *v1.Role: csi-mock-volumes-9759-8616/external-attacher-cfg-csi-mock-volumes-9759
Mar 25 13:22:01.270: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9759-8616/csi-attacher-role-cfg
Mar 25 13:22:01.309: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-provisioner
Mar 25 13:22:01.322: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9759
Mar 25 13:22:01.328: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9759
Mar 25
13:22:01.339: INFO: deleting *v1.Role: csi-mock-volumes-9759-8616/external-provisioner-cfg-csi-mock-volumes-9759 Mar 25 13:22:01.346: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9759-8616/csi-provisioner-role-cfg Mar 25 13:22:01.352: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-resizer Mar 25 13:22:01.358: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9759 Mar 25 13:22:01.363: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9759 Mar 25 13:22:01.384: INFO: deleting *v1.Role: csi-mock-volumes-9759-8616/external-resizer-cfg-csi-mock-volumes-9759 Mar 25 13:22:01.400: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9759-8616/csi-resizer-role-cfg Mar 25 13:22:01.425: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-snapshotter Mar 25 13:22:01.430: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9759 Mar 25 13:22:01.442: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9759 Mar 25 13:22:01.453: INFO: deleting *v1.Role: csi-mock-volumes-9759-8616/external-snapshotter-leaderelection-csi-mock-volumes-9759 Mar 25 13:22:01.459: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9759-8616/external-snapshotter-leaderelection Mar 25 13:22:01.466: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9759-8616/csi-mock Mar 25 13:22:01.472: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9759 Mar 25 13:22:01.479: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9759 Mar 25 13:22:01.507: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9759 Mar 25 13:22:01.514: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9759 Mar 25 13:22:01.519: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9759 Mar 25 13:22:01.545: INFO: deleting 
*v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9759 Mar 25 13:22:01.549: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9759 Mar 25 13:22:01.558: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9759-8616/csi-mockplugin Mar 25 13:22:01.569: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9759 Mar 25 13:22:01.575: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9759-8616/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-9759-8616 STEP: Waiting for namespaces [csi-mock-volumes-9759-8616] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:22:29.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [44.950 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, have capacity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 Mar 25 13:21:55.214: create CSIStorageCapacity {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name: GenerateName:fake-capacity- Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} NodeTopology:&LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},} StorageClassName:mock-csi-storage-capacity-csi-mock-volumes-9759 Capacity:100Gi MaximumVolumeSize:} Unexpected error: <*errors.StatusError | 0xc0048f45a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", 
ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":133,"completed":82,"skipped":4887,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:22:29.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: 
gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Mar 25 13:22:33.716: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6315 PodName:hostexec-latest-worker-kxl5m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:22:33.716: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:22:33.830: INFO: exec latest-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Mar 25 13:22:33.830: INFO: exec latest-worker: stdout: "0\n" Mar 25 13:22:33.830: INFO: exec latest-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Mar 25 13:22:33.830: INFO: exec latest-worker: exit code: 0 Mar 25 13:22:33.830: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:22:33.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6315" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [4.228 seconds]
[sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
  One pod requesting one prebound PVC [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
  should be able to mount volume and write from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
  Requires at least 1 scsi fs localSSD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:22:33.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Mar 25 13:22:33.902: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:22:33.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2383" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.131 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:22:33.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-9258 STEP: 
Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 13:22:34.314: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9258-2244/csi-attacher Mar 25 13:22:34.317: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9258 Mar 25 13:22:34.317: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9258 Mar 25 13:22:34.331: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9258 Mar 25 13:22:34.336: INFO: creating *v1.Role: csi-mock-volumes-9258-2244/external-attacher-cfg-csi-mock-volumes-9258 Mar 25 13:22:34.343: INFO: creating *v1.RoleBinding: csi-mock-volumes-9258-2244/csi-attacher-role-cfg Mar 25 13:22:34.392: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9258-2244/csi-provisioner Mar 25 13:22:34.425: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9258 Mar 25 13:22:34.425: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9258 Mar 25 13:22:34.430: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9258 Mar 25 13:22:34.445: INFO: creating *v1.Role: csi-mock-volumes-9258-2244/external-provisioner-cfg-csi-mock-volumes-9258 Mar 25 13:22:34.469: INFO: creating *v1.RoleBinding: csi-mock-volumes-9258-2244/csi-provisioner-role-cfg Mar 25 13:22:34.480: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9258-2244/csi-resizer Mar 25 13:22:34.486: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9258 Mar 25 13:22:34.486: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9258 Mar 25 13:22:34.493: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9258 Mar 25 13:22:34.498: INFO: creating *v1.Role: csi-mock-volumes-9258-2244/external-resizer-cfg-csi-mock-volumes-9258 Mar 25 13:22:34.558: INFO: creating *v1.RoleBinding: csi-mock-volumes-9258-2244/csi-resizer-role-cfg Mar 25 13:22:34.562: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-9258-2244/csi-snapshotter Mar 25 13:22:34.577: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9258 Mar 25 13:22:34.577: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9258 Mar 25 13:22:34.588: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9258 Mar 25 13:22:34.594: INFO: creating *v1.Role: csi-mock-volumes-9258-2244/external-snapshotter-leaderelection-csi-mock-volumes-9258 Mar 25 13:22:34.600: INFO: creating *v1.RoleBinding: csi-mock-volumes-9258-2244/external-snapshotter-leaderelection Mar 25 13:22:34.618: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9258-2244/csi-mock Mar 25 13:22:34.630: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9258 Mar 25 13:22:34.639: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9258 Mar 25 13:22:34.654: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9258 Mar 25 13:22:34.677: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9258 Mar 25 13:22:34.692: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9258 Mar 25 13:22:34.702: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9258 Mar 25 13:22:34.708: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9258 Mar 25 13:22:34.714: INFO: creating *v1.StatefulSet: csi-mock-volumes-9258-2244/csi-mockplugin Mar 25 13:22:34.733: INFO: creating *v1.StatefulSet: csi-mock-volumes-9258-2244/csi-mockplugin-attacher Mar 25 13:22:34.757: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9258 to register on node latest-worker2 STEP: Creating pod Mar 25 13:22:44.533: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 13:22:44.543: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-dp52w] 
to have phase Bound Mar 25 13:22:44.547: INFO: PersistentVolumeClaim pvc-dp52w found but phase is Pending instead of Bound. Mar 25 13:22:46.551: INFO: PersistentVolumeClaim pvc-dp52w found and phase=Bound (2.008258876s) STEP: Expanding current pvc STEP: Deleting pod pvc-volume-tester-gzpgs Mar 25 13:25:08.631: INFO: Deleting pod "pvc-volume-tester-gzpgs" in namespace "csi-mock-volumes-9258" Mar 25 13:25:08.640: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gzpgs" to be fully deleted STEP: Deleting claim pvc-dp52w Mar 25 13:25:16.818: INFO: Waiting up to 2m0s for PersistentVolume pvc-037c3fb0-6c73-4b5a-96f5-b145b4fe82a3 to get deleted Mar 25 13:25:16.845: INFO: PersistentVolume pvc-037c3fb0-6c73-4b5a-96f5-b145b4fe82a3 found and phase=Bound (26.804451ms) Mar 25 13:25:18.849: INFO: PersistentVolume pvc-037c3fb0-6c73-4b5a-96f5-b145b4fe82a3 was removed STEP: Deleting storageclass csi-mock-volumes-9258-sc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9258 STEP: Waiting for namespaces [csi-mock-volumes-9258] to vanish STEP: uninstalling csi mock driver Mar 25 13:25:24.868: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9258-2244/csi-attacher Mar 25 13:25:24.917: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9258 Mar 25 13:25:24.939: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9258 Mar 25 13:25:24.989: INFO: deleting *v1.Role: csi-mock-volumes-9258-2244/external-attacher-cfg-csi-mock-volumes-9258 Mar 25 13:25:25.057: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9258-2244/csi-attacher-role-cfg Mar 25 13:25:25.072: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9258-2244/csi-provisioner Mar 25 13:25:25.131: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9258 Mar 25 13:25:25.262: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9258 Mar 25 13:25:25.270: INFO: deleting *v1.Role: 
csi-mock-volumes-9258-2244/external-provisioner-cfg-csi-mock-volumes-9258
Mar 25 13:25:25.351: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9258-2244/csi-provisioner-role-cfg
Mar 25 13:25:25.360: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9258-2244/csi-resizer
Mar 25 13:25:25.378: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9258
Mar 25 13:25:25.383: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9258
Mar 25 13:25:25.394: INFO: deleting *v1.Role: csi-mock-volumes-9258-2244/external-resizer-cfg-csi-mock-volumes-9258
Mar 25 13:25:25.439: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9258-2244/csi-resizer-role-cfg
Mar 25 13:25:25.498: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9258-2244/csi-snapshotter
Mar 25 13:25:25.540: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9258
Mar 25 13:25:25.680: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9258
Mar 25 13:25:25.712: INFO: deleting *v1.Role: csi-mock-volumes-9258-2244/external-snapshotter-leaderelection-csi-mock-volumes-9258
Mar 25 13:25:25.725: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9258-2244/external-snapshotter-leaderelection
Mar 25 13:25:25.731: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9258-2244/csi-mock
Mar 25 13:25:25.743: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9258
Mar 25 13:25:25.842: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9258
Mar 25 13:25:25.995: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9258
Mar 25 13:25:26.001: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9258
Mar 25 13:25:26.024: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9258
Mar 25 13:25:26.042: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9258
Mar 25 13:25:26.048: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9258
Mar 25 13:25:26.054: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9258-2244/csi-mockplugin
Mar 25 13:25:26.066: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9258-2244/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-9258-2244
STEP: Waiting for namespaces [csi-mock-volumes-9258-2244] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:26:12.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:218.275 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI Volume expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
should not expand volume if resizingOnDriver=off, resizingOnSC=on
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":133,"completed":83,"skipped":4967,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:26:12.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should preserve attachment policy when no CSIDriver present
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
STEP: Building a driver namespace object, basename csi-mock-volumes-7750
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 13:26:14.807: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-attacher
Mar 25 13:26:14.811: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7750
Mar 25 13:26:14.811: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7750
Mar 25 13:26:14.856: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7750
Mar 25 13:26:14.887: INFO: creating *v1.Role: csi-mock-volumes-7750-129/external-attacher-cfg-csi-mock-volumes-7750
Mar 25 13:26:15.392: INFO: creating *v1.RoleBinding: csi-mock-volumes-7750-129/csi-attacher-role-cfg
Mar 25 13:26:15.396: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-provisioner
Mar 25 13:26:15.449: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7750
Mar 25 13:26:15.449: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7750
Mar 25 13:26:15.576: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7750
Mar 25 13:26:15.632: INFO: creating *v1.Role: csi-mock-volumes-7750-129/external-provisioner-cfg-csi-mock-volumes-7750
Mar 25 13:26:15.762: INFO: creating *v1.RoleBinding: csi-mock-volumes-7750-129/csi-provisioner-role-cfg
Mar 25 13:26:15.794: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-resizer
Mar 25 13:26:15.833: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7750
Mar 25 13:26:15.833: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7750
Mar 25 13:26:15.985: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7750
Mar 25 13:26:16.055: INFO: creating *v1.Role: csi-mock-volumes-7750-129/external-resizer-cfg-csi-mock-volumes-7750
Mar 25 13:26:16.195: INFO: creating *v1.RoleBinding: csi-mock-volumes-7750-129/csi-resizer-role-cfg
Mar 25 13:26:16.262: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-snapshotter
Mar 25 13:26:16.331: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7750
Mar 25 13:26:16.331: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7750
Mar 25 13:26:16.335: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7750
Mar 25 13:26:16.359: INFO: creating *v1.Role: csi-mock-volumes-7750-129/external-snapshotter-leaderelection-csi-mock-volumes-7750
Mar 25 13:26:16.505: INFO: creating *v1.RoleBinding: csi-mock-volumes-7750-129/external-snapshotter-leaderelection
Mar 25 13:26:16.685: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-mock
Mar 25 13:26:16.713: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7750
Mar 25 13:26:16.730: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7750
Mar 25 13:26:16.841: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7750
Mar 25 13:26:16.869: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7750
Mar 25 13:26:16.922: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7750
Mar 25 13:26:16.959: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7750
Mar 25 13:26:16.964: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7750
Mar 25 13:26:16.983: INFO: creating *v1.StatefulSet: csi-mock-volumes-7750-129/csi-mockplugin
Mar 25 13:26:17.002: INFO: creating *v1.StatefulSet: csi-mock-volumes-7750-129/csi-mockplugin-attacher
Mar 25 13:26:17.046: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7750 to register on node latest-worker2
STEP: Creating pod
Mar 25 13:26:33.835: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Mar 25 13:26:34.022: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-cdmbr] to have phase Bound
Mar 25 13:26:34.031: INFO: PersistentVolumeClaim pvc-cdmbr found but phase is Pending instead of Bound.
Mar 25 13:26:36.067: INFO: PersistentVolumeClaim pvc-cdmbr found and phase=Bound (2.0447769s)
STEP: Checking if VolumeAttachment was created for the pod
STEP: Deleting pod pvc-volume-tester-j6vxv
Mar 25 13:26:58.846: INFO: Deleting pod "pvc-volume-tester-j6vxv" in namespace "csi-mock-volumes-7750"
Mar 25 13:26:58.850: INFO: Wait up to 5m0s for pod "pvc-volume-tester-j6vxv" to be fully deleted
STEP: Deleting claim pvc-cdmbr
Mar 25 13:27:06.939: INFO: Waiting up to 2m0s for PersistentVolume pvc-9d08af8b-822a-45f9-a0b6-21176d1a528a to get deleted
Mar 25 13:27:06.948: INFO: PersistentVolume pvc-9d08af8b-822a-45f9-a0b6-21176d1a528a found and phase=Bound (9.389897ms)
Mar 25 13:27:08.954: INFO: PersistentVolume pvc-9d08af8b-822a-45f9-a0b6-21176d1a528a was removed
STEP: Deleting storageclass csi-mock-volumes-7750-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-7750
STEP: Waiting for namespaces [csi-mock-volumes-7750] to vanish
STEP: uninstalling csi mock driver
Mar 25 13:27:15.030: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-attacher
Mar 25 13:27:15.036: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7750
Mar 25 13:27:15.086: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7750
Mar 25 13:27:15.094: INFO: deleting *v1.Role: csi-mock-volumes-7750-129/external-attacher-cfg-csi-mock-volumes-7750
Mar 25 13:27:15.100: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7750-129/csi-attacher-role-cfg
Mar 25 13:27:15.105: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-provisioner
Mar 25 13:27:15.112: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7750
Mar 25 13:27:15.131: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7750
Mar 25 13:27:15.153: INFO: deleting *v1.Role: csi-mock-volumes-7750-129/external-provisioner-cfg-csi-mock-volumes-7750
Mar 25 13:27:15.160: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7750-129/csi-provisioner-role-cfg
Mar 25 13:27:15.165: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-resizer
Mar 25 13:27:15.171: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7750
Mar 25 13:27:15.177: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7750
Mar 25 13:27:15.195: INFO: deleting *v1.Role: csi-mock-volumes-7750-129/external-resizer-cfg-csi-mock-volumes-7750
Mar 25 13:27:15.202: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7750-129/csi-resizer-role-cfg
Mar 25 13:27:15.207: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-snapshotter
Mar 25 13:27:15.214: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7750
Mar 25 13:27:15.234: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7750
Mar 25 13:27:15.255: INFO: deleting *v1.Role: csi-mock-volumes-7750-129/external-snapshotter-leaderelection-csi-mock-volumes-7750
Mar 25 13:27:15.261: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7750-129/external-snapshotter-leaderelection
Mar 25 13:27:15.267: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7750-129/csi-mock
Mar 25 13:27:15.273: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7750
Mar 25 13:27:15.279: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7750
Mar 25 13:27:15.290: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7750
Mar 25 13:27:15.309: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7750
Mar 25 13:27:15.338: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7750
Mar 25 13:27:15.351: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7750
Mar 25 13:27:15.358: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7750
Mar 25 13:27:15.364: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7750-129/csi-mockplugin
Mar 25 13:27:15.369: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7750-129/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-7750-129
STEP: Waiting for namespaces [csi-mock-volumes-7750-129] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:28:02.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:110.073 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSI attach test using mock driver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
should preserve attachment policy when no CSIDriver present
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":133,"completed":84,"skipped":5019,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volumes NFSv3 should be mountable for NFSv3
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103
[BeforeEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:28:02.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68
Mar 25 13:28:02.625: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:28:02.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9610" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.323 seconds]
[sig-storage] Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
NFSv3 [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:102
should be mountable for NFSv3
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103
Only supported for node OS distro [gci ubuntu custom] (not debian)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:28:02.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-16c1891d-0f99-439c-b1d8-edd9a7171b0a
STEP: Creating a pod to test consume configMaps
Mar 25 13:28:03.034: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa" in namespace "projected-9957" to be "Succeeded or Failed"
Mar 25 13:28:03.224: INFO: Pod "pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 189.76357ms
Mar 25 13:28:05.512: INFO: Pod "pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477870359s
Mar 25 13:28:07.605: INFO: Pod "pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570877567s
Mar 25 13:28:09.609: INFO: Pod "pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa": Phase="Running", Reason="", readiness=true. Elapsed: 6.574909048s
Mar 25 13:28:11.615: INFO: Pod "pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.58089553s
STEP: Saw pod success
Mar 25 13:28:11.615: INFO: Pod "pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa" satisfied condition "Succeeded or Failed"
Mar 25 13:28:11.618: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa container agnhost-container:
STEP: delete the pod
Mar 25 13:28:11.741: INFO: Waiting for pod pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa to disappear
Mar 25 13:28:11.784: INFO: Pod pod-projected-configmaps-e43acdf1-e46b-42a9-8ed5-c0303dc0c9fa no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:28:11.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9957" for this suite.
• [SLOW TEST:9.141 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":85,"skipped":5065,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:28:11.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 25 13:28:11.895: INFO: Waiting up to 5m0s for pod "pod-dffc38d7-dab1-4bc3-8562-34eaae087377" in namespace "emptydir-5483" to be "Succeeded or Failed"
Mar 25 13:28:11.909: INFO: Pod "pod-dffc38d7-dab1-4bc3-8562-34eaae087377": Phase="Pending", Reason="", readiness=false. Elapsed: 13.278237ms
Mar 25 13:28:13.961: INFO: Pod "pod-dffc38d7-dab1-4bc3-8562-34eaae087377": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065210553s
Mar 25 13:28:15.965: INFO: Pod "pod-dffc38d7-dab1-4bc3-8562-34eaae087377": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069092199s
STEP: Saw pod success
Mar 25 13:28:15.965: INFO: Pod "pod-dffc38d7-dab1-4bc3-8562-34eaae087377" satisfied condition "Succeeded or Failed"
Mar 25 13:28:15.966: INFO: Trying to get logs from node latest-worker2 pod pod-dffc38d7-dab1-4bc3-8562-34eaae087377 container test-container:
STEP: delete the pod
Mar 25 13:28:16.220: INFO: Waiting for pod pod-dffc38d7-dab1-4bc3-8562-34eaae087377 to disappear
Mar 25 13:28:16.269: INFO: Pod pod-dffc38d7-dab1-4bc3-8562-34eaae087377 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:28:16.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5483" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":133,"completed":86,"skipped":5181,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:28:16.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Mar 25 13:28:16.590: INFO: Waiting up to 5m0s for pod "pod-df2b027e-ab10-409a-bc77-2f7c2f21958d" in namespace "emptydir-6263" to be "Succeeded or Failed"
Mar 25 13:28:16.722: INFO: Pod "pod-df2b027e-ab10-409a-bc77-2f7c2f21958d": Phase="Pending", Reason="", readiness=false. Elapsed: 131.58361ms
Mar 25 13:28:18.726: INFO: Pod "pod-df2b027e-ab10-409a-bc77-2f7c2f21958d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1360318s
Mar 25 13:28:20.823: INFO: Pod "pod-df2b027e-ab10-409a-bc77-2f7c2f21958d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232788489s
Mar 25 13:28:22.828: INFO: Pod "pod-df2b027e-ab10-409a-bc77-2f7c2f21958d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.237617012s
STEP: Saw pod success
Mar 25 13:28:22.828: INFO: Pod "pod-df2b027e-ab10-409a-bc77-2f7c2f21958d" satisfied condition "Succeeded or Failed"
Mar 25 13:28:22.830: INFO: Trying to get logs from node latest-worker2 pod pod-df2b027e-ab10-409a-bc77-2f7c2f21958d container test-container:
STEP: delete the pod
Mar 25 13:28:22.931: INFO: Waiting for pod pod-df2b027e-ab10-409a-bc77-2f7c2f21958d to disappear
Mar 25 13:28:22.933: INFO: Pod pod-df2b027e-ab10-409a-bc77-2f7c2f21958d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:28:22.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6263" for this suite.
• [SLOW TEST:6.663 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
nonexistent volume subPath should have the correct mode and owner using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":133,"completed":87,"skipped":5190,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:28:22.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] CSIStorageCapacity unused
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
STEP: Building a driver namespace object, basename csi-mock-volumes-3945
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 13:28:23.093: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-attacher
Mar 25 13:28:23.097: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3945
Mar 25 13:28:23.097: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3945
Mar 25 13:28:23.102: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3945
Mar 25 13:28:23.108: INFO: creating *v1.Role: csi-mock-volumes-3945-1250/external-attacher-cfg-csi-mock-volumes-3945
Mar 25 13:28:23.135: INFO: creating *v1.RoleBinding: csi-mock-volumes-3945-1250/csi-attacher-role-cfg
Mar 25 13:28:23.156: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-provisioner
Mar 25 13:28:23.200: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3945
Mar 25 13:28:23.200: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3945
Mar 25 13:28:23.216: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3945
Mar 25 13:28:23.222: INFO: creating *v1.Role: csi-mock-volumes-3945-1250/external-provisioner-cfg-csi-mock-volumes-3945
Mar 25 13:28:23.228: INFO: creating *v1.RoleBinding: csi-mock-volumes-3945-1250/csi-provisioner-role-cfg
Mar 25 13:28:23.252: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-resizer
Mar 25 13:28:23.276: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3945
Mar 25 13:28:23.276: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3945
Mar 25 13:28:23.287: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3945
Mar 25 13:28:23.293: INFO: creating *v1.Role: csi-mock-volumes-3945-1250/external-resizer-cfg-csi-mock-volumes-3945
Mar 25 13:28:23.299: INFO: creating *v1.RoleBinding: csi-mock-volumes-3945-1250/csi-resizer-role-cfg
Mar 25 13:28:23.338: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-snapshotter
Mar 25 13:28:23.347: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3945
Mar 25 13:28:23.347: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3945
Mar 25 13:28:23.354: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3945
Mar 25 13:28:23.393: INFO: creating *v1.Role: csi-mock-volumes-3945-1250/external-snapshotter-leaderelection-csi-mock-volumes-3945
Mar 25 13:28:23.421: INFO: creating *v1.RoleBinding: csi-mock-volumes-3945-1250/external-snapshotter-leaderelection
Mar 25 13:28:23.482: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-mock
Mar 25 13:28:23.493: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3945
Mar 25 13:28:23.520: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3945
Mar 25 13:28:23.619: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3945
Mar 25 13:28:23.636: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3945
Mar 25 13:28:23.659: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3945
Mar 25 13:28:23.672: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3945
Mar 25 13:28:23.682: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3945
Mar 25 13:28:23.694: INFO: creating *v1.StatefulSet: csi-mock-volumes-3945-1250/csi-mockplugin
Mar 25 13:28:23.718: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3945
Mar 25 13:28:23.787: INFO: creating *v1.StatefulSet: csi-mock-volumes-3945-1250/csi-mockplugin-attacher
Mar 25 13:28:23.798: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3945"
Mar 25 13:28:23.826: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3945 to register on node latest-worker2
STEP: Creating pod
Mar 25 13:28:39.393: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Deleting the previously created pod
Mar 25 13:28:55.793: INFO: Deleting pod "pvc-volume-tester-rtgns" in namespace "csi-mock-volumes-3945"
Mar 25 13:28:55.797: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rtgns" to be fully deleted
STEP: Deleting pod pvc-volume-tester-rtgns
Mar 25 13:30:05.859: INFO: Deleting pod "pvc-volume-tester-rtgns" in namespace "csi-mock-volumes-3945"
STEP: Deleting claim pvc-gc5sr
Mar 25 13:30:05.871: INFO: Waiting up to 2m0s for PersistentVolume pvc-1bbbd0dc-e936-4879-8699-223616ee8010 to get deleted
Mar 25 13:30:05.879: INFO: PersistentVolume pvc-1bbbd0dc-e936-4879-8699-223616ee8010 found and phase=Bound (8.240739ms)
Mar 25 13:30:07.883: INFO: PersistentVolume pvc-1bbbd0dc-e936-4879-8699-223616ee8010 was removed
STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-3945
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-3945
STEP: Waiting for namespaces [csi-mock-volumes-3945] to vanish
STEP: uninstalling csi mock driver
Mar 25 13:30:13.903: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-attacher
Mar 25 13:30:13.910: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3945
Mar 25 13:30:13.918: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3945
Mar 25 13:30:13.929: INFO: deleting *v1.Role: csi-mock-volumes-3945-1250/external-attacher-cfg-csi-mock-volumes-3945
Mar 25 13:30:13.935: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3945-1250/csi-attacher-role-cfg
Mar 25 13:30:13.941: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-provisioner
Mar 25 13:30:13.947: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3945
Mar 25 13:30:13.953: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3945
Mar 25 13:30:13.964: INFO: deleting *v1.Role: csi-mock-volumes-3945-1250/external-provisioner-cfg-csi-mock-volumes-3945
Mar 25 13:30:13.982: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3945-1250/csi-provisioner-role-cfg
Mar 25 13:30:14.011: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-resizer
Mar 25 13:30:14.020: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3945
Mar 25 13:30:14.030: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3945
Mar 25 13:30:14.037: INFO: deleting *v1.Role: csi-mock-volumes-3945-1250/external-resizer-cfg-csi-mock-volumes-3945
Mar 25 13:30:14.043: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3945-1250/csi-resizer-role-cfg
Mar 25 13:30:14.049: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-snapshotter
Mar 25 13:30:14.055: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3945
Mar 25 13:30:14.065: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3945
Mar 25 13:30:14.087: INFO: deleting *v1.Role: csi-mock-volumes-3945-1250/external-snapshotter-leaderelection-csi-mock-volumes-3945
Mar 25 13:30:14.125: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3945-1250/external-snapshotter-leaderelection
Mar 25 13:30:14.133: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-1250/csi-mock
Mar 25 13:30:14.139: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3945
Mar 25 13:30:14.149: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3945
Mar 25 13:30:14.157: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3945
Mar 25 13:30:14.169: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3945
Mar 25 13:30:14.181: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3945
Mar 25 13:30:14.193: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3945
Mar 25 13:30:14.205: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3945
Mar 25 13:30:14.306: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3945-1250/csi-mockplugin
Mar 25 13:30:14.319: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3945
Mar 25 13:30:14.324: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3945-1250/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-3945-1250
STEP: Waiting for namespaces [csi-mock-volumes-3945-1250] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:31:08.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:165.468 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSIStorageCapacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
CSIStorageCapacity unused
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":133,"completed":88,"skipped":5214,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:31:08.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] CSIStorageCapacity disabled
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
STEP: Building a driver namespace object, basename csi-mock-volumes-4716
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Mar 25 13:31:08.592: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-attacher
Mar 25 13:31:08.605: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4716
Mar 25 13:31:08.605: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4716
Mar 25 13:31:08.618: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4716
Mar 25 13:31:08.624: INFO: creating *v1.Role: csi-mock-volumes-4716-9971/external-attacher-cfg-csi-mock-volumes-4716
Mar 25 13:31:08.659: INFO: creating *v1.RoleBinding: csi-mock-volumes-4716-9971/csi-attacher-role-cfg
Mar 25 13:31:08.673: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-provisioner
Mar 25 13:31:08.716: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4716
Mar 25 13:31:08.716: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4716
Mar 25
13:31:08.721: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4716 Mar 25 13:31:08.731: INFO: creating *v1.Role: csi-mock-volumes-4716-9971/external-provisioner-cfg-csi-mock-volumes-4716 Mar 25 13:31:08.771: INFO: creating *v1.RoleBinding: csi-mock-volumes-4716-9971/csi-provisioner-role-cfg Mar 25 13:31:08.998: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-resizer Mar 25 13:31:09.003: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4716 Mar 25 13:31:09.003: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4716 Mar 25 13:31:09.160: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4716 Mar 25 13:31:09.170: INFO: creating *v1.Role: csi-mock-volumes-4716-9971/external-resizer-cfg-csi-mock-volumes-4716 Mar 25 13:31:09.175: INFO: creating *v1.RoleBinding: csi-mock-volumes-4716-9971/csi-resizer-role-cfg Mar 25 13:31:09.236: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-snapshotter Mar 25 13:31:09.247: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4716 Mar 25 13:31:09.247: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4716 Mar 25 13:31:09.253: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4716 Mar 25 13:31:09.297: INFO: creating *v1.Role: csi-mock-volumes-4716-9971/external-snapshotter-leaderelection-csi-mock-volumes-4716 Mar 25 13:31:09.302: INFO: creating *v1.RoleBinding: csi-mock-volumes-4716-9971/external-snapshotter-leaderelection Mar 25 13:31:09.319: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-mock Mar 25 13:31:09.335: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4716 Mar 25 13:31:09.359: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4716 Mar 25 13:31:09.373: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4716 Mar 25 13:31:09.379: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4716 Mar 25 13:31:09.435: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4716 Mar 25 13:31:09.438: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4716 Mar 25 13:31:09.445: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4716 Mar 25 13:31:09.451: INFO: creating *v1.StatefulSet: csi-mock-volumes-4716-9971/csi-mockplugin Mar 25 13:31:09.457: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4716 Mar 25 13:31:09.473: INFO: creating *v1.StatefulSet: csi-mock-volumes-4716-9971/csi-mockplugin-attacher Mar 25 13:31:09.498: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4716" Mar 25 13:31:09.579: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4716 to register on node latest-worker STEP: Creating pod Mar 25 13:31:24.830: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Mar 25 13:31:48.255: INFO: Deleting pod "pvc-volume-tester-bxbr5" in namespace "csi-mock-volumes-4716" Mar 25 13:31:48.259: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bxbr5" to be fully deleted STEP: Deleting pod pvc-volume-tester-bxbr5 Mar 25 13:32:36.285: INFO: Deleting pod "pvc-volume-tester-bxbr5" in namespace "csi-mock-volumes-4716" STEP: Deleting claim pvc-mbbmh Mar 25 13:32:36.295: INFO: Waiting up to 2m0s for PersistentVolume pvc-42266338-3ccd-42e1-8410-2d90fda885fd to get deleted Mar 25 13:32:36.303: INFO: PersistentVolume pvc-42266338-3ccd-42e1-8410-2d90fda885fd found and phase=Bound (8.493569ms) Mar 25 13:32:38.307: INFO: PersistentVolume pvc-42266338-3ccd-42e1-8410-2d90fda885fd was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-4716 STEP: Cleaning up resources STEP: deleting 
the test namespace: csi-mock-volumes-4716 STEP: Waiting for namespaces [csi-mock-volumes-4716] to vanish STEP: uninstalling csi mock driver Mar 25 13:32:44.328: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-attacher Mar 25 13:32:44.334: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4716 Mar 25 13:32:44.396: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4716 Mar 25 13:32:44.408: INFO: deleting *v1.Role: csi-mock-volumes-4716-9971/external-attacher-cfg-csi-mock-volumes-4716 Mar 25 13:32:44.420: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4716-9971/csi-attacher-role-cfg Mar 25 13:32:44.449: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-provisioner Mar 25 13:32:44.468: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4716 Mar 25 13:32:44.478: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4716 Mar 25 13:32:44.486: INFO: deleting *v1.Role: csi-mock-volumes-4716-9971/external-provisioner-cfg-csi-mock-volumes-4716 Mar 25 13:32:44.552: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4716-9971/csi-provisioner-role-cfg Mar 25 13:32:44.576: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-resizer Mar 25 13:32:44.594: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4716 Mar 25 13:32:44.601: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4716 Mar 25 13:32:44.606: INFO: deleting *v1.Role: csi-mock-volumes-4716-9971/external-resizer-cfg-csi-mock-volumes-4716 Mar 25 13:32:44.626: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4716-9971/csi-resizer-role-cfg Mar 25 13:32:44.642: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-snapshotter Mar 25 13:32:44.648: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4716 Mar 25 13:32:44.677: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4716 
Mar 25 13:32:44.695: INFO: deleting *v1.Role: csi-mock-volumes-4716-9971/external-snapshotter-leaderelection-csi-mock-volumes-4716 Mar 25 13:32:44.701: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4716-9971/external-snapshotter-leaderelection Mar 25 13:32:44.708: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4716-9971/csi-mock Mar 25 13:32:44.713: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4716 Mar 25 13:32:44.723: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4716 Mar 25 13:32:44.747: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4716 Mar 25 13:32:44.762: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4716 Mar 25 13:32:44.767: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4716 Mar 25 13:32:44.774: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4716 Mar 25 13:32:44.803: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4716 Mar 25 13:32:44.816: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4716-9971/csi-mockplugin Mar 25 13:32:44.822: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4716 Mar 25 13:32:44.829: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4716-9971/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4716-9971 STEP: Waiting for namespaces [csi-mock-volumes-4716-9971] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:33:47.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:159.093 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":133,"completed":89,"skipped":5344,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:33:47.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Mar 25 
13:33:57.263: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-740 PodName:hostexec-latest-worker2-njncn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 13:33:57.263: INFO: >>> kubeConfig: /root/.kube/config Mar 25 13:33:57.354: INFO: exec latest-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Mar 25 13:33:57.354: INFO: exec latest-worker2: stdout: "0\n" Mar 25 13:33:57.354: INFO: exec latest-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Mar 25 13:33:57.354: INFO: exec latest-worker2: exit code: 0 Mar 25 13:33:57.354: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:33:57.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-740" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [9.910 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:483 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:33:57.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a persistent volume claim. 
[sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:483 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a PersistentVolumeClaim STEP: Ensuring resource quota status captures persistent volume claim creation STEP: Deleting a PersistentVolumeClaim STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:34:09.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-634" for this suite. • [SLOW TEST:12.874 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:483 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. 
[sig-storage]","total":133,"completed":90,"skipped":5403,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]} S ------------------------------ [sig-storage] Flexvolumes should be mountable when non-attachable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:34:10.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename flexvolume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:169 Mar 25 13:34:11.525: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 13:34:11.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "flexvolume-8175" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.015 seconds] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be mountable when non-attachable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:173 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] CSI mock volume storage capacity unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 13:34:12.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-4975 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Mar 25 13:34:12.657: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-attacher Mar 25 13:34:12.687: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4975 Mar 25 13:34:12.687: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4975 Mar 25 13:34:12.699: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4975 Mar 25 13:34:12.704: INFO: creating *v1.Role: 
csi-mock-volumes-4975-981/external-attacher-cfg-csi-mock-volumes-4975 Mar 25 13:34:12.710: INFO: creating *v1.RoleBinding: csi-mock-volumes-4975-981/csi-attacher-role-cfg Mar 25 13:34:12.728: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-provisioner Mar 25 13:34:12.734: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4975 Mar 25 13:34:12.734: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4975 Mar 25 13:34:12.741: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4975 Mar 25 13:34:12.758: INFO: creating *v1.Role: csi-mock-volumes-4975-981/external-provisioner-cfg-csi-mock-volumes-4975 Mar 25 13:34:12.765: INFO: creating *v1.RoleBinding: csi-mock-volumes-4975-981/csi-provisioner-role-cfg Mar 25 13:34:12.771: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-resizer Mar 25 13:34:12.807: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4975 Mar 25 13:34:12.807: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4975 Mar 25 13:34:12.822: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4975 Mar 25 13:34:12.836: INFO: creating *v1.Role: csi-mock-volumes-4975-981/external-resizer-cfg-csi-mock-volumes-4975 Mar 25 13:34:12.848: INFO: creating *v1.RoleBinding: csi-mock-volumes-4975-981/csi-resizer-role-cfg Mar 25 13:34:12.866: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-snapshotter Mar 25 13:34:12.878: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4975 Mar 25 13:34:12.878: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4975 Mar 25 13:34:12.887: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4975 Mar 25 13:34:12.939: INFO: creating *v1.Role: csi-mock-volumes-4975-981/external-snapshotter-leaderelection-csi-mock-volumes-4975 Mar 25 13:34:12.962: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-4975-981/external-snapshotter-leaderelection Mar 25 13:34:12.974: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-mock Mar 25 13:34:12.998: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4975 Mar 25 13:34:13.029: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4975 Mar 25 13:34:13.034: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4975 Mar 25 13:34:13.065: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4975 Mar 25 13:34:13.075: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4975 Mar 25 13:34:13.099: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4975 Mar 25 13:34:13.112: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4975 Mar 25 13:34:13.118: INFO: creating *v1.StatefulSet: csi-mock-volumes-4975-981/csi-mockplugin Mar 25 13:34:13.125: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4975 Mar 25 13:34:13.142: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4975" Mar 25 13:34:13.183: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4975 to register on node latest-worker2 STEP: Creating pod Mar 25 13:34:23.122: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Mar 25 13:34:23.133: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-vdngm] to have phase Bound Mar 25 13:34:23.156: INFO: PersistentVolumeClaim pvc-vdngm found but phase is Pending instead of Bound. 
Mar 25 13:34:25.159: INFO: PersistentVolumeClaim pvc-vdngm found and phase=Bound (2.02590267s) Mar 25 13:34:29.179: INFO: Deleting pod "pvc-volume-tester-f7r98" in namespace "csi-mock-volumes-4975" Mar 25 13:34:29.207: INFO: Wait up to 5m0s for pod "pvc-volume-tester-f7r98" to be fully deleted STEP: Checking PVC events Mar 25 13:35:08.267: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vdngm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4975", SelfLink:"", UID:"c0852a00-da33-40f1-802a-2090728ce8ba", ResourceVersion:"1179859", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752276063, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ac600), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ac618)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002e5b5c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002e5b5d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 13:35:08.267: INFO: PVC event MODIFIED: 
&v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vdngm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4975", SelfLink:"", UID:"c0852a00-da33-40f1-802a-2090728ce8ba", ResourceVersion:"1179860", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752276063, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4975"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001b09f68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001b09f80)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001b09f98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001b09fb0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002172dc0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002172de0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 13:35:08.267: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vdngm", 
GenerateName:"pvc-", Namespace:"csi-mock-volumes-4975", SelfLink:"", UID:"c0852a00-da33-40f1-802a-2090728ce8ba", ResourceVersion:"1179867", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752276063, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4975"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059adc68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059adc80)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059adc98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059adcb0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c0852a00-da33-40f1-802a-2090728ce8ba", StorageClassName:(*string)(0xc0017925a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0017925b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 13:35:08.267: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pvc-vdngm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4975", SelfLink:"", UID:"c0852a00-da33-40f1-802a-2090728ce8ba", ResourceVersion:"1179868", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752276063, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4975"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059adce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059adcf8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059add10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059add28)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c0852a00-da33-40f1-802a-2090728ce8ba", StorageClassName:(*string)(0xc0017925e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0017925f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, 
Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 13:35:08.267: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vdngm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4975", SelfLink:"", UID:"c0852a00-da33-40f1-802a-2090728ce8ba", ResourceVersion:"1180044", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752276063, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc0059add58), DeletionGracePeriodSeconds:(*int64)(0xc001d94648), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4975"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059add70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059add88)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059adda0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059addb8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c0852a00-da33-40f1-802a-2090728ce8ba", StorageClassName:(*string)(0xc001792630), VolumeMode:(*v1.PersistentVolumeMode)(0xc001792640), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", 
AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Mar 25 13:35:08.268: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-vdngm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4975", SelfLink:"", UID:"c0852a00-da33-40f1-802a-2090728ce8ba", ResourceVersion:"1180045", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752276063, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(0xc0059adde8), DeletionGracePeriodSeconds:(*int64)(0xc001d94838), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4975"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ade00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ade18)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ade30), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ade48)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-c0852a00-da33-40f1-802a-2090728ce8ba", 
StorageClassName:(*string)(0xc001792680), VolumeMode:(*v1.PersistentVolumeMode)(0xc001792690), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
STEP: Deleting pod pvc-volume-tester-f7r98
Mar 25 13:35:08.268: INFO: Deleting pod "pvc-volume-tester-f7r98" in namespace "csi-mock-volumes-4975"
STEP: Deleting claim pvc-vdngm
STEP: Deleting storageclass csi-mock-volumes-4975-sc
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-4975
STEP: Waiting for namespaces [csi-mock-volumes-4975] to vanish
STEP: uninstalling csi mock driver
Mar 25 13:35:18.308: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-attacher
Mar 25 13:35:18.361: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4975
Mar 25 13:35:18.375: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4975
Mar 25 13:35:18.391: INFO: deleting *v1.Role: csi-mock-volumes-4975-981/external-attacher-cfg-csi-mock-volumes-4975
Mar 25 13:35:18.403: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4975-981/csi-attacher-role-cfg
Mar 25 13:35:18.411: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-provisioner
Mar 25 13:35:18.596: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4975
Mar 25 13:35:18.650: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4975
Mar 25 13:35:18.724: INFO: deleting *v1.Role: csi-mock-volumes-4975-981/external-provisioner-cfg-csi-mock-volumes-4975
Mar 25 13:35:18.735: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4975-981/csi-provisioner-role-cfg
Mar 25 13:35:18.817: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-resizer
Mar 25 13:35:18.855: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4975
Mar 25 13:35:18.885: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4975
Mar 25 13:35:18.912: INFO: deleting *v1.Role: csi-mock-volumes-4975-981/external-resizer-cfg-csi-mock-volumes-4975
Mar 25 13:35:18.969: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4975-981/csi-resizer-role-cfg
Mar 25 13:35:18.990: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-snapshotter
Mar 25 13:35:19.002: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4975
Mar 25 13:35:19.014: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4975
Mar 25 13:35:19.022: INFO: deleting *v1.Role: csi-mock-volumes-4975-981/external-snapshotter-leaderelection-csi-mock-volumes-4975
Mar 25 13:35:19.040: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4975-981/external-snapshotter-leaderelection
Mar 25 13:35:19.089: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4975-981/csi-mock
Mar 25 13:35:19.100: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4975
Mar 25 13:35:19.115: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4975
Mar 25 13:35:19.148: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4975
Mar 25 13:35:19.160: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4975
Mar 25 13:35:19.165: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4975
Mar 25 13:35:19.171: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4975
Mar 25 13:35:19.178: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4975
Mar 25 13:35:19.184: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4975-981/csi-mockplugin
Mar 25 13:35:19.216: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4975
STEP: deleting the driver namespace: csi-mock-volumes-4975-981
STEP: Waiting for namespaces [csi-mock-volumes-4975-981] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:36:07.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:115.090 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":133,"completed":91,"skipped":5423,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:36:07.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-1c2b62d1-cada-45db-9263-eb7cff4c2805
STEP: Creating a pod to test consume configMaps
Mar 25 13:36:07.532: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b5e8415-376a-4906-999e-4df0b8339181" in namespace "projected-5492" to be "Succeeded or Failed"
Mar 25 13:36:07.587: INFO: Pod "pod-projected-configmaps-3b5e8415-376a-4906-999e-4df0b8339181": Phase="Pending", Reason="", readiness=false. Elapsed: 54.771979ms
Mar 25 13:36:09.713: INFO: Pod "pod-projected-configmaps-3b5e8415-376a-4906-999e-4df0b8339181": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180976358s
Mar 25 13:36:11.718: INFO: Pod "pod-projected-configmaps-3b5e8415-376a-4906-999e-4df0b8339181": Phase="Running", Reason="", readiness=true. Elapsed: 4.18555998s
Mar 25 13:36:13.722: INFO: Pod "pod-projected-configmaps-3b5e8415-376a-4906-999e-4df0b8339181": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190212708s
STEP: Saw pod success
Mar 25 13:36:13.722: INFO: Pod "pod-projected-configmaps-3b5e8415-376a-4906-999e-4df0b8339181" satisfied condition "Succeeded or Failed"
Mar 25 13:36:13.725: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-3b5e8415-376a-4906-999e-4df0b8339181 container agnhost-container:
STEP: delete the pod
Mar 25 13:36:13.764: INFO: Waiting for pod pod-projected-configmaps-3b5e8415-376a-4906-999e-4df0b8339181 to disappear
Mar 25 13:36:13.777: INFO: Pod pod-projected-configmaps-3b5e8415-376a-4906-999e-4df0b8339181 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:36:13.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5492" for this suite.
• [SLOW TEST:6.389 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":133,"completed":92,"skipped":5487,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
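Editor's note: the PersistentVolumes-local fixtures in this log initialize and tear down their test volumes by exec'ing shell commands on the node through nsenter (for the dir and dir-link variume types, plain mkdir/ln -s/rm, as visible in the ExecWithOptions entries). A minimal local sketch of the dir-link setup and cleanup, using a hypothetical temporary path and no nsenter, just to illustrate what those commands do:

```python
import os
import shutil
import tempfile

# Hypothetical stand-in for /tmp/local-volume-test-<uuid> on the node.
base = tempfile.mkdtemp()
backend = os.path.join(base, "local-volume-test-example-backend")
link = os.path.join(base, "local-volume-test-example")

# Setup, mirroring: mkdir <path>-backend && ln -s <path>-backend <path>
os.mkdir(backend)          # backing directory
os.symlink(backend, link)  # symlink published as the local PV path
assert os.path.islink(link) and os.path.isdir(link)

# Teardown, mirroring the AfterEach: rm -r <path> && rm -r <path>-backend
os.remove(link)            # removes the symlink itself
shutil.rmtree(backend)
shutil.rmtree(base)
```

The real test wraps the equivalent shell in `nsenter --mount=/rootfs/proc/1/ns/mnt` so the directories are created in the host's mount namespace rather than inside the hostexec pod.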
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:36:13.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 25 13:36:17.926: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-347daa95-a238-432b-a0bf-1511a5fab387] Namespace:persistent-local-volumes-test-5453 PodName:hostexec-latest-worker2-djbcp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 13:36:17.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 13:36:18.049: INFO: Creating a PV followed by a PVC
Mar 25 13:36:18.069: INFO: Waiting for PV local-pvmhvdj to bind to PVC pvc-7d6td
Mar 25 13:36:18.069: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-7d6td] to have phase Bound
Mar 25 13:36:18.129: INFO: PersistentVolumeClaim pvc-7d6td found but phase is Pending instead of Bound.
Mar 25 13:36:20.132: INFO: PersistentVolumeClaim pvc-7d6td found and phase=Bound (2.063089811s)
Mar 25 13:36:20.132: INFO: Waiting up to 3m0s for PersistentVolume local-pvmhvdj to have phase Bound
Mar 25 13:36:20.134: INFO: PersistentVolume local-pvmhvdj found and phase=Bound (2.318895ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Mar 25 13:36:20.161: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 13:36:20.162: INFO: Deleting PersistentVolumeClaim "pvc-7d6td"
Mar 25 13:36:20.170: INFO: Deleting PersistentVolume "local-pvmhvdj"
STEP: Removing the test directory
Mar 25 13:36:20.175: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-347daa95-a238-432b-a0bf-1511a5fab387] Namespace:persistent-local-volumes-test-5453 PodName:hostexec-latest-worker2-djbcp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 13:36:20.175: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:36:20.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-5453" for this suite.
S [SKIPPING] [6.504 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set different fsGroup for second pod if first pod is deleted [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
      Disabled temporarily, reopen after #73168 is fixed
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 13:36:20.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Mar 25 13:36:24.419: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-a20514e6-d345-46e5-a294-cb86856dfaae-backend && ln -s /tmp/local-volume-test-a20514e6-d345-46e5-a294-cb86856dfaae-backend /tmp/local-volume-test-a20514e6-d345-46e5-a294-cb86856dfaae] Namespace:persistent-local-volumes-test-3083 PodName:hostexec-latest-worker2-z4fk9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 13:36:24.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Mar 25 13:36:24.610: INFO: Creating a PV followed by a PVC
Mar 25 13:36:24.622: INFO: Waiting for PV local-pvhgxxh to bind to PVC pvc-zqcbp
Mar 25 13:36:24.622: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-zqcbp] to have phase Bound
Mar 25 13:36:24.674: INFO: PersistentVolumeClaim pvc-zqcbp found but phase is Pending instead of Bound.
Mar 25 13:36:26.680: INFO: PersistentVolumeClaim pvc-zqcbp found and phase=Bound (2.057240654s)
Mar 25 13:36:26.680: INFO: Waiting up to 3m0s for PersistentVolume local-pvhgxxh to have phase Bound
Mar 25 13:36:26.683: INFO: PersistentVolume local-pvhgxxh found and phase=Bound (3.474314ms)
[BeforeEach] Set fsGroup for local volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Mar 25 13:36:26.689: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Mar 25 13:36:26.690: INFO: Deleting PersistentVolumeClaim "pvc-zqcbp"
Mar 25 13:36:26.697: INFO: Deleting PersistentVolume "local-pvhgxxh"
STEP: Removing the test directory
Mar 25 13:36:26.711: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a20514e6-d345-46e5-a294-cb86856dfaae && rm -r /tmp/local-volume-test-a20514e6-d345-46e5-a294-cb86856dfaae-backend] Namespace:persistent-local-volumes-test-3083 PodName:hostexec-latest-worker2-z4fk9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Mar 25 13:36:26.711: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 13:36:26.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3083" for this suite.
S [SKIPPING] [6.606 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Set fsGroup for local volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
      should set different fsGroup for second pod if first pod is deleted [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
      Disabled temporarily, reopen after #73168 is fixed
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Mar 25 13:36:26.898: INFO: Running AfterSuite actions on all nodes
Mar 25 13:36:26.898: INFO: Running AfterSuite actions on node 1
Mar 25 13:36:26.898: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_storage/junit_01.xml
{"msg":"Test Suite completed","total":133,"completed":92,"skipped":5641,"failed":4,"failures":["[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity"]}

Summarizing 4 Failures:

[Fail] [sig-storage] CSI mock volume CSIStorageCapacity [It] CSIStorageCapacity used, insufficient capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201

[Fail] [sig-storage] CSI mock volume CSIStorageCapacity [It] CSIStorageCapacity used, no capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1232

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742

[Fail] [sig-storage] CSI mock volume CSIStorageCapacity [It] CSIStorageCapacity used, have capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1201

Ran 96 of 5737 Specs in 7082.505 seconds
FAIL! -- 92 Passed | 4 Failed | 0 Pending | 5641 Skipped
--- FAIL: TestE2E (7082.60s)
FAIL
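Editor's note: the Ginkgo tallies in the final summary can be cross-checked against each other; a quick sketch, with the values copied from the summary above:

```python
# Values taken from the run summary:
# "Ran 96 of 5737 Specs" and "92 Passed | 4 Failed | 0 Pending | 5641 Skipped".
total_specs = 5737
ran, passed, failed, pending, skipped = 96, 92, 4, 0, 5641

# Every spec that ran finished as passed, failed, or pending.
assert passed + failed + pending == ran
# Ran plus skipped accounts for the whole suite.
assert ran + skipped == total_specs
```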