Running Suite: Kubernetes e2e suite =================================== Random Seed: 1636781995 - Will randomize all specs Will run 5770 specs Running in parallel across 10 nodes Nov 13 05:39:57.425: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.427: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Nov 13 05:39:57.456: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 05:39:57.514: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 05:39:57.514: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 05:39:57.514: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 05:39:57.514: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 13 05:39:57.514: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Nov 13 05:39:57.531: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Nov 13 05:39:57.531: INFO: e2e test version: v1.21.5 Nov 13 05:39:57.532: INFO: kube-apiserver version: v1.21.1 Nov 13 05:39:57.533: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.539: INFO: Cluster IP family: ipv4 SSSS ------------------------------ Nov 13 05:39:57.546: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.567: INFO: Cluster IP family: ipv4 Nov 13 05:39:57.547: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.569: INFO: Cluster IP family: ipv4 Nov 13 05:39:57.547: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.569: INFO: Cluster IP family: ipv4 Nov 13 05:39:57.547: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.570: INFO: Cluster IP family: ipv4 Nov 13 05:39:57.548: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.570: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSS ------------------------------ Nov 13 05:39:57.557: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.579: INFO: Cluster IP family: ipv4 SSSSSSSSS ------------------------------ Nov 13 05:39:57.565: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.586: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Nov 13 05:39:57.577: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.605: INFO: Cluster IP 
family: ipv4 Nov 13 05:39:57.572: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:39:57.605: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-limits-on-node W1113 05:39:57.724965 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:57.725: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:57.726: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35 Nov 13 05:39:57.729: INFO: Only supported for providers [aws gce gke] (not local) [AfterEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:39:57.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-limits-on-node-3057" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.072 seconds] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should verify that all nodes have volume limits [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41 Only supported for providers [aws gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:36 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az W1113 05:39:59.173453 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:59.173: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:59.175: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:39 Nov 13 05:39:59.177: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:39:59.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "multi-az-3502" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [1.464 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should schedule pods in the same zones as statically provisioned PVs [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:50 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:40 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev W1113 05:39:57.578008 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:57.578: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:57.581: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:39:57.602: INFO: The status of Pod test-hostpath-type-ttr86 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:39:59.605: INFO: The status of Pod test-hostpath-type-ttr86 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:01.612: INFO: The status of Pod test-hostpath-type-ttr86 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:03.605: INFO: The status of Pod test-hostpath-type-ttr86 is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Nov 13 05:40:03.608: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-4877 PodName:test-hostpath-type-ttr86 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:40:03.608: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:05.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-4877" for this suite. 
• [SLOW TEST:8.171 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile","total":-1,"completed":1,"skipped":4,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected W1113 05:39:57.672747 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:57.673: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:57.674: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 STEP: Creating a pod to test downward API volume plugin Nov 13 05:39:57.690: INFO: Waiting up to 5m0s for pod "metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e" in namespace "projected-4055" to be "Succeeded or Failed" Nov 13 05:39:57.693: INFO: Pod "metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.556214ms Nov 13 05:39:59.698: INFO: Pod "metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007057055s Nov 13 05:40:01.701: INFO: Pod "metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010392108s Nov 13 05:40:03.708: INFO: Pod "metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01757187s Nov 13 05:40:05.712: INFO: Pod "metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021793574s Nov 13 05:40:07.715: INFO: Pod "metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.024792741s STEP: Saw pod success Nov 13 05:40:07.715: INFO: Pod "metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e" satisfied condition "Succeeded or Failed" Nov 13 05:40:07.718: INFO: Trying to get logs from node node1 pod metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e container client-container: STEP: delete the pod Nov 13 05:40:07.734: INFO: Waiting for pod metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e to disappear Nov 13 05:40:07.736: INFO: Pod metadata-volume-e9e6b33c-a80e-4a65-84df-f5cb3618ce5e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:07.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4055" for this suite. • [SLOW TEST:10.157 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":2,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:40:00.685: INFO: The status of Pod test-hostpath-type-vvmlz is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:02.691: INFO: The status of Pod test-hostpath-type-vvmlz is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:04.690: INFO: The status of Pod test-hostpath-type-vvmlz is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:06.689: INFO: The status of Pod test-hostpath-type-vvmlz is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:08.690: INFO: The status of Pod test-hostpath-type-vvmlz is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Nov 13 05:40:08.692: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-1341 PodName:test-hostpath-type-vvmlz ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:40:08.692: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character 
Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:12.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-1341" for this suite. • [SLOW TEST:15.125 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev","total":-1,"completed":1,"skipped":70,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:07.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 STEP: Creating a pod to test hostPath r/w Nov 13 05:40:07.813: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9684" to be "Succeeded or Failed" Nov 13 05:40:07.815: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.802038ms Nov 13 05:40:09.819: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005991185s Nov 13 05:40:11.822: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00942772s Nov 13 05:40:13.826: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013158492s STEP: Saw pod success Nov 13 05:40:13.826: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Nov 13 05:40:13.829: INFO: Trying to get logs from node node1 pod pod-host-path-test container test-container-2: STEP: delete the pod Nov 13 05:40:13.845: INFO: Waiting for pod pod-host-path-test to disappear Nov 13 05:40:13.847: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:13.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9684" for this suite. 
• [SLOW TEST:6.076 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":2,"skipped":18,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:59.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-cf39e77d-9db4-4e3f-9c10-1f585d85d2da" Nov 13 05:40:05.948: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cf39e77d-9db4-4e3f-9c10-1f585d85d2da && dd if=/dev/zero of=/tmp/local-volume-test-cf39e77d-9db4-4e3f-9c10-1f585d85d2da/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-cf39e77d-9db4-4e3f-9c10-1f585d85d2da/file] Namespace:persistent-local-volumes-test-943 PodName:hostexec-node2-wqmh2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:05.948: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:06.110: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cf39e77d-9db4-4e3f-9c10-1f585d85d2da/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-943 PodName:hostexec-node2-wqmh2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:06.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:40:06.202: INFO: Creating a PV followed by a PVC Nov 13 05:40:06.208: INFO: Waiting for PV local-pvg6ckr to bind to PVC pvc-cclkw Nov 13 05:40:06.208: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cclkw] to have phase Bound Nov 13 05:40:06.211: INFO: PersistentVolumeClaim pvc-cclkw found but phase is Pending instead of Bound. Nov 13 05:40:08.214: INFO: PersistentVolumeClaim pvc-cclkw found but phase is Pending instead of Bound. Nov 13 05:40:10.220: INFO: PersistentVolumeClaim pvc-cclkw found but phase is Pending instead of Bound. 
Nov 13 05:40:12.227: INFO: PersistentVolumeClaim pvc-cclkw found and phase=Bound (6.019008945s) Nov 13 05:40:12.227: INFO: Waiting up to 3m0s for PersistentVolume local-pvg6ckr to have phase Bound Nov 13 05:40:12.230: INFO: PersistentVolume local-pvg6ckr found and phase=Bound (2.271484ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:40:18.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-943 exec pod-493dcb3f-2637-4f35-922a-768dc28bcf87 --namespace=persistent-local-volumes-test-943 -- stat -c %g /mnt/volume1' Nov 13 05:40:18.518: INFO: stderr: "" Nov 13 05:40:18.518: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-493dcb3f-2637-4f35-922a-768dc28bcf87 in namespace persistent-local-volumes-test-943 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:40:18.523: INFO: Deleting PersistentVolumeClaim "pvc-cclkw" Nov 13 05:40:18.526: INFO: Deleting PersistentVolume "local-pvg6ckr" Nov 13 05:40:18.531: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cf39e77d-9db4-4e3f-9c10-1f585d85d2da/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-943 PodName:hostexec-node2-wqmh2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:18.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-cf39e77d-9db4-4e3f-9c10-1f585d85d2da/file Nov 13 05:40:18.628: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-943 PodName:hostexec-node2-wqmh2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:18.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-cf39e77d-9db4-4e3f-9c10-1f585d85d2da Nov 13 05:40:18.706: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cf39e77d-9db4-4e3f-9c10-1f585d85d2da] Namespace:persistent-local-volumes-test-943 PodName:hostexec-node2-wqmh2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:18.706: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:18.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-943" for this suite. 
• [SLOW TEST:19.546 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":1,"skipped":98,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:12.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:40:13.026: INFO: The status of Pod test-hostpath-type-t7jvb is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:15.030: INFO: The status of Pod test-hostpath-type-t7jvb is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:17.031: INFO: The status of Pod test-hostpath-type-t7jvb is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:19.032: INFO: The status of Pod test-hostpath-type-t7jvb is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:21.030: INFO: The status of Pod test-hostpath-type-t7jvb is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:23.030: INFO: The status of Pod test-hostpath-type-t7jvb is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Nov 13 05:40:23.033: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-3248 PodName:test-hostpath-type-t7jvb ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:40:23.033: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:25.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-3248" for this suite. 
• [SLOW TEST:12.726 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile","total":-1,"completed":2,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:25.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 13 05:40:25.901: INFO: Waiting up to 5m0s for pod "pod-2badd3cf-4d1c-4b06-8a57-5478d143e2ce" in namespace "emptydir-91" to be "Succeeded or Failed" Nov 13 05:40:25.903: INFO: Pod "pod-2badd3cf-4d1c-4b06-8a57-5478d143e2ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145446ms Nov 13 05:40:27.908: INFO: Pod "pod-2badd3cf-4d1c-4b06-8a57-5478d143e2ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006665299s Nov 13 05:40:29.912: INFO: Pod "pod-2badd3cf-4d1c-4b06-8a57-5478d143e2ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010946168s STEP: Saw pod success Nov 13 05:40:29.912: INFO: Pod "pod-2badd3cf-4d1c-4b06-8a57-5478d143e2ce" satisfied condition "Succeeded or Failed" Nov 13 05:40:29.916: INFO: Trying to get logs from node node1 pod pod-2badd3cf-4d1c-4b06-8a57-5478d143e2ce container test-container: STEP: delete the pod Nov 13 05:40:29.930: INFO: Waiting for pod pod-2badd3cf-4d1c-4b06-8a57-5478d143e2ce to disappear Nov 13 05:40:29.932: INFO: Pod pod-2badd3cf-4d1c-4b06-8a57-5478d143e2ce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:29.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-91" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":3,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:29.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 STEP: Creating projection with secret that has name projected-secret-test-dea587f7-3d0a-42d4-86de-43e5fb6a1f67 STEP: Creating a pod to test consume secrets Nov 13 05:40:30.055: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f422845e-ab3d-46a2-af86-58135609c2a9" in namespace "projected-6496" to be "Succeeded or Failed" Nov 13 05:40:30.057: INFO: Pod "pod-projected-secrets-f422845e-ab3d-46a2-af86-58135609c2a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.873107ms Nov 13 05:40:32.060: INFO: Pod "pod-projected-secrets-f422845e-ab3d-46a2-af86-58135609c2a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005784417s Nov 13 05:40:34.064: INFO: Pod "pod-projected-secrets-f422845e-ab3d-46a2-af86-58135609c2a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009843422s STEP: Saw pod success Nov 13 05:40:34.065: INFO: Pod "pod-projected-secrets-f422845e-ab3d-46a2-af86-58135609c2a9" satisfied condition "Succeeded or Failed" Nov 13 05:40:34.067: INFO: Trying to get logs from node node2 pod pod-projected-secrets-f422845e-ab3d-46a2-af86-58135609c2a9 container projected-secret-volume-test: STEP: delete the pod Nov 13 05:40:34.079: INFO: Waiting for pod pod-projected-secrets-f422845e-ab3d-46a2-af86-58135609c2a9 to disappear Nov 13 05:40:34.081: INFO: Pod pod-projected-secrets-f422845e-ab3d-46a2-af86-58135609c2a9 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:34.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6496" for this suite. STEP: Destroying namespace "secret-namespace-5041" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":4,"skipped":190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:13.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8bdfeb73-d6b0-4c57-8526-2bdd4b6e5570" Nov 13 05:40:19.928: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8bdfeb73-d6b0-4c57-8526-2bdd4b6e5570" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8bdfeb73-d6b0-4c57-8526-2bdd4b6e5570" "/tmp/local-volume-test-8bdfeb73-d6b0-4c57-8526-2bdd4b6e5570"] Namespace:persistent-local-volumes-test-447 PodName:hostexec-node2-hrrq8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:19.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:40:20.050: INFO: Creating a PV followed by a PVC Nov 13 05:40:20.059: INFO: Waiting for PV local-pvbgwzx to bind to PVC pvc-5tw8r Nov 13 05:40:20.059: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5tw8r] to have phase Bound Nov 13 05:40:20.061: INFO: PersistentVolumeClaim pvc-5tw8r found but phase is Pending instead of Bound. Nov 13 05:40:22.066: INFO: PersistentVolumeClaim pvc-5tw8r found but phase is Pending instead of Bound. Nov 13 05:40:24.070: INFO: PersistentVolumeClaim pvc-5tw8r found but phase is Pending instead of Bound. Nov 13 05:40:26.073: INFO: PersistentVolumeClaim pvc-5tw8r found but phase is Pending instead of Bound. 
Nov 13 05:40:28.077: INFO: PersistentVolumeClaim pvc-5tw8r found and phase=Bound (8.018248759s) Nov 13 05:40:28.077: INFO: Waiting up to 3m0s for PersistentVolume local-pvbgwzx to have phase Bound Nov 13 05:40:28.080: INFO: PersistentVolume local-pvbgwzx found and phase=Bound (2.865795ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:40:34.109: INFO: pod "pod-93087e86-2e3e-4ba9-92d2-d77c65e8a5b8" created on Node "node2" STEP: Writing in pod1 Nov 13 05:40:34.109: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-447 PodName:pod-93087e86-2e3e-4ba9-92d2-d77c65e8a5b8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:40:34.109: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:34.201: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:40:34.201: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-447 PodName:pod-93087e86-2e3e-4ba9-92d2-d77c65e8a5b8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:40:34.201: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:34.309: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-93087e86-2e3e-4ba9-92d2-d77c65e8a5b8 in namespace persistent-local-volumes-test-447 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:40:34.313: INFO: Deleting PersistentVolumeClaim "pvc-5tw8r" Nov 13 05:40:34.317: INFO: Deleting PersistentVolume "local-pvbgwzx" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8bdfeb73-d6b0-4c57-8526-2bdd4b6e5570" Nov 13 05:40:34.321: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8bdfeb73-d6b0-4c57-8526-2bdd4b6e5570"] Namespace:persistent-local-volumes-test-447 PodName:hostexec-node2-hrrq8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:34.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:40:34.421: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8bdfeb73-d6b0-4c57-8526-2bdd4b6e5570] Namespace:persistent-local-volumes-test-447 PodName:hostexec-node2-hrrq8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:34.421: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:34.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-447" for this suite. • [SLOW TEST:20.645 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":25,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:34.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:40:34.555: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:34.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7861" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:34.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c" Nov 13 05:40:38.206: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c && dd if=/dev/zero of=/tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c/file] Namespace:persistent-local-volumes-test-1687 PodName:hostexec-node2-p4hjb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:38.206: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:38.357: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1687 PodName:hostexec-node2-p4hjb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:38.357: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:38.468: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c && chmod o+rwx /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c] Namespace:persistent-local-volumes-test-1687 PodName:hostexec-node2-p4hjb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:38.468: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:40:38.739: INFO: Creating a PV followed by a PVC Nov 13 05:40:38.746: INFO: Waiting for PV local-pvdpj2q to bind to PVC pvc-747lq Nov 13 05:40:38.746: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-747lq] to have phase Bound Nov 13 05:40:38.749: INFO: PersistentVolumeClaim pvc-747lq found but phase is Pending instead of Bound. Nov 13 05:40:40.753: INFO: PersistentVolumeClaim pvc-747lq found but phase is Pending instead of Bound. Nov 13 05:40:42.755: INFO: PersistentVolumeClaim pvc-747lq found and phase=Bound (4.009044572s) Nov 13 05:40:42.755: INFO: Waiting up to 3m0s for PersistentVolume local-pvdpj2q to have phase Bound Nov 13 05:40:42.757: INFO: PersistentVolume local-pvdpj2q found and phase=Bound (2.042146ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:40:46.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1687 exec pod-07462f16-e32f-451d-b641-65a7a8420e62 --namespace=persistent-local-volumes-test-1687 -- stat -c %g /mnt/volume1' Nov 13 05:40:47.047: INFO: stderr: "" Nov 13 05:40:47.047: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-07462f16-e32f-451d-b641-65a7a8420e62 in namespace persistent-local-volumes-test-1687 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:40:47.054: INFO: Deleting PersistentVolumeClaim "pvc-747lq" Nov 13 05:40:47.058: INFO: Deleting PersistentVolume "local-pvdpj2q" Nov 13 05:40:47.062: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c] Namespace:persistent-local-volumes-test-1687 PodName:hostexec-node2-p4hjb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:47.062: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:47.153: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1687 PodName:hostexec-node2-p4hjb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:47.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c/file Nov 13 05:40:47.243: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1687 PodName:hostexec-node2-p4hjb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:47.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c Nov 13 05:40:47.359: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7d6095d0-a10c-444c-a0ce-c752a0ac790c] Namespace:persistent-local-volumes-test-1687 PodName:hostexec-node2-p4hjb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:47.359: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:47.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1687" for this suite. • [SLOW TEST:13.325 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":5,"skipped":215,"failed":0} [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:47.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should not provision a volume in an unmanaged GCE zone. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Nov 13 05:40:47.505: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:47.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-242" for this suite. S [SKIPPING] [0.035 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should not provision a volume in an unmanaged GCE zone. 
[It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:452 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1113 05:39:57.603131 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:57.603: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:57.604: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-4507 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:39:57.704: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-attacher Nov 13 05:39:57.709: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4507 Nov 13 05:39:57.709: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4507 Nov 13 05:39:57.711: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4507 Nov 13 05:39:57.714: INFO: creating *v1.Role: csi-mock-volumes-4507-1648/external-attacher-cfg-csi-mock-volumes-4507 Nov 13 05:39:57.717: INFO: creating *v1.RoleBinding: csi-mock-volumes-4507-1648/csi-attacher-role-cfg Nov 13 05:39:57.720: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-provisioner Nov 13 05:39:57.723: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4507 Nov 13 05:39:57.723: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4507 Nov 13 05:39:57.726: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4507 Nov 13 05:39:57.730: INFO: creating *v1.Role: csi-mock-volumes-4507-1648/external-provisioner-cfg-csi-mock-volumes-4507 Nov 13 05:39:57.733: INFO: creating *v1.RoleBinding: csi-mock-volumes-4507-1648/csi-provisioner-role-cfg Nov 13 05:39:57.736: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-resizer Nov 13 05:39:57.738: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4507 Nov 13 05:39:57.739: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4507 Nov 13 05:39:57.741: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4507 Nov 13 05:39:57.744: INFO: creating *v1.Role: csi-mock-volumes-4507-1648/external-resizer-cfg-csi-mock-volumes-4507 Nov 13 05:39:57.747: INFO: creating *v1.RoleBinding: csi-mock-volumes-4507-1648/csi-resizer-role-cfg Nov 13 05:39:57.750: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-snapshotter Nov 13 05:39:57.752: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4507 Nov 13 
05:39:57.752: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4507 Nov 13 05:39:57.756: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4507 Nov 13 05:39:57.758: INFO: creating *v1.Role: csi-mock-volumes-4507-1648/external-snapshotter-leaderelection-csi-mock-volumes-4507 Nov 13 05:39:57.761: INFO: creating *v1.RoleBinding: csi-mock-volumes-4507-1648/external-snapshotter-leaderelection Nov 13 05:39:57.763: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-mock Nov 13 05:39:57.765: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4507 Nov 13 05:39:57.768: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4507 Nov 13 05:39:57.770: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4507 Nov 13 05:39:57.773: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4507 Nov 13 05:39:57.775: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4507 Nov 13 05:39:57.778: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4507 Nov 13 05:39:57.780: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4507 Nov 13 05:39:57.783: INFO: creating *v1.StatefulSet: csi-mock-volumes-4507-1648/csi-mockplugin Nov 13 05:39:57.787: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4507 Nov 13 05:39:57.790: INFO: creating *v1.StatefulSet: csi-mock-volumes-4507-1648/csi-mockplugin-attacher Nov 13 05:39:57.794: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4507" Nov 13 05:39:57.796: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4507 to register on node node2 STEP: Creating pod Nov 13 05:40:07.310: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:40:07.315: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jklvh] to have phase Bound Nov 13 05:40:07.317: INFO: PersistentVolumeClaim pvc-jklvh found but phase is Pending instead of Bound. 
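
The "Waiting up to timeout=5m0s for PersistentVolumeClaims [...] to have phase Bound" entries above come from the framework polling the claim's status until it leaves Pending. Purely as an illustration (this is not the e2e framework's own helper), a similar check can be written with client-go; the kubeconfig path matches this run, but the namespace and claim name below are placeholders.

    // pvcwait.go - minimal sketch: poll a PVC until it reports phase Bound,
    // mirroring the "Waiting ... to have phase Bound" log lines above.
    // Namespace and claim name are placeholders, not taken from this run.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns, claim := "csi-mock-volumes-example", "pvc-example" // placeholders
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("PersistentVolumeClaim %s phase=%s\n", claim, pvc.Status.Phase)
            return pvc.Status.Phase == corev1.ClaimBound, nil
        })
        if err != nil {
            panic(err)
        }
    }
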
Nov 13 05:40:09.322: INFO: PersistentVolumeClaim pvc-jklvh found and phase=Bound (2.006498234s) STEP: Deleting the previously created pod Nov 13 05:40:25.342: INFO: Deleting pod "pvc-volume-tester-zdqz4" in namespace "csi-mock-volumes-4507" Nov 13 05:40:25.347: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zdqz4" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:40:33.438: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5f21e260-adbd-433b-9e00-943bad04f6b7/volumes/kubernetes.io~csi/pvc-b225ca33-3c95-471f-8bba-532647efa14d/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-zdqz4 Nov 13 05:40:33.438: INFO: Deleting pod "pvc-volume-tester-zdqz4" in namespace "csi-mock-volumes-4507" STEP: Deleting claim pvc-jklvh Nov 13 05:40:33.446: INFO: Waiting up to 2m0s for PersistentVolume pvc-b225ca33-3c95-471f-8bba-532647efa14d to get deleted Nov 13 05:40:33.448: INFO: PersistentVolume pvc-b225ca33-3c95-471f-8bba-532647efa14d found and phase=Bound (1.907002ms) Nov 13 05:40:35.451: INFO: PersistentVolume pvc-b225ca33-3c95-471f-8bba-532647efa14d was removed STEP: Deleting storageclass csi-mock-volumes-4507-scvwc4t STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4507 STEP: Waiting for namespaces [csi-mock-volumes-4507] to vanish STEP: uninstalling csi mock driver Nov 13 05:40:41.463: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-attacher Nov 13 05:40:41.467: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4507 Nov 13 05:40:41.470: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4507 Nov 13 05:40:41.474: INFO: deleting *v1.Role: csi-mock-volumes-4507-1648/external-attacher-cfg-csi-mock-volumes-4507 Nov 13 05:40:41.478: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4507-1648/csi-attacher-role-cfg Nov 13 05:40:41.481: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-provisioner Nov 13 05:40:41.485: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4507 Nov 13 05:40:41.488: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4507 Nov 13 05:40:41.492: INFO: deleting *v1.Role: csi-mock-volumes-4507-1648/external-provisioner-cfg-csi-mock-volumes-4507 Nov 13 05:40:41.496: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4507-1648/csi-provisioner-role-cfg Nov 13 05:40:41.499: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-resizer Nov 13 05:40:41.502: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4507 Nov 13 05:40:41.506: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4507 Nov 13 05:40:41.509: INFO: deleting *v1.Role: csi-mock-volumes-4507-1648/external-resizer-cfg-csi-mock-volumes-4507 Nov 13 05:40:41.512: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4507-1648/csi-resizer-role-cfg Nov 13 05:40:41.515: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-snapshotter Nov 13 05:40:41.519: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4507 Nov 13 05:40:41.522: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4507 Nov 13 05:40:41.525: INFO: deleting *v1.Role: csi-mock-volumes-4507-1648/external-snapshotter-leaderelection-csi-mock-volumes-4507 Nov 13 
05:40:41.528: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4507-1648/external-snapshotter-leaderelection Nov 13 05:40:41.532: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4507-1648/csi-mock Nov 13 05:40:41.535: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4507 Nov 13 05:40:41.538: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4507 Nov 13 05:40:41.542: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4507 Nov 13 05:40:41.546: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4507 Nov 13 05:40:41.549: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4507 Nov 13 05:40:41.552: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4507 Nov 13 05:40:41.555: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4507 Nov 13 05:40:41.558: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4507-1648/csi-mockplugin Nov 13 05:40:41.562: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4507 Nov 13 05:40:41.565: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4507-1648/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4507-1648 STEP: Waiting for namespaces [csi-mock-volumes-4507-1648] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:53.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:56.004 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":1,"skipped":3,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:47.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:40:47.565: INFO: The status of Pod test-hostpath-type-rkwxt is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:49.569: INFO: The status of Pod test-hostpath-type-rkwxt is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:40:51.568: INFO: The status of Pod test-hostpath-type-rkwxt is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:40:53.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-4378" for this suite. • [SLOW TEST:6.075 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev","total":-1,"completed":6,"skipped":221,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1113 05:39:57.692932 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:57.693: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:57.694: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-1323 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:39:58.303: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-attacher Nov 13 05:39:58.305: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1323 Nov 13 05:39:58.305: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1323 Nov 13 05:39:58.309: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1323 Nov 13 05:39:58.312: INFO: creating *v1.Role: csi-mock-volumes-1323-7550/external-attacher-cfg-csi-mock-volumes-1323 Nov 13 05:39:58.315: INFO: creating *v1.RoleBinding: csi-mock-volumes-1323-7550/csi-attacher-role-cfg Nov 13 05:39:58.318: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-provisioner Nov 13 05:39:58.321: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1323 Nov 13 05:39:58.321: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1323 Nov 13 05:39:58.324: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1323 Nov 13 05:39:58.326: INFO: creating *v1.Role: csi-mock-volumes-1323-7550/external-provisioner-cfg-csi-mock-volumes-1323 Nov 13 05:39:58.330: INFO: creating *v1.RoleBinding: csi-mock-volumes-1323-7550/csi-provisioner-role-cfg Nov 13 05:39:58.333: 
INFO: creating *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-resizer Nov 13 05:39:58.336: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1323 Nov 13 05:39:58.336: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1323 Nov 13 05:39:58.339: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1323 Nov 13 05:39:58.342: INFO: creating *v1.Role: csi-mock-volumes-1323-7550/external-resizer-cfg-csi-mock-volumes-1323 Nov 13 05:39:58.344: INFO: creating *v1.RoleBinding: csi-mock-volumes-1323-7550/csi-resizer-role-cfg Nov 13 05:39:58.347: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-snapshotter Nov 13 05:39:58.349: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1323 Nov 13 05:39:58.350: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1323 Nov 13 05:39:58.352: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1323 Nov 13 05:39:58.355: INFO: creating *v1.Role: csi-mock-volumes-1323-7550/external-snapshotter-leaderelection-csi-mock-volumes-1323 Nov 13 05:39:58.358: INFO: creating *v1.RoleBinding: csi-mock-volumes-1323-7550/external-snapshotter-leaderelection Nov 13 05:39:58.361: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-mock Nov 13 05:39:58.363: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1323 Nov 13 05:39:58.366: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1323 Nov 13 05:39:58.369: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1323 Nov 13 05:39:58.371: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1323 Nov 13 05:39:58.374: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1323 Nov 13 05:39:58.377: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1323 Nov 13 05:39:58.379: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1323 Nov 13 05:39:58.381: INFO: creating *v1.StatefulSet: csi-mock-volumes-1323-7550/csi-mockplugin Nov 13 05:39:58.386: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1323 Nov 13 05:39:58.389: INFO: creating *v1.StatefulSet: csi-mock-volumes-1323-7550/csi-mockplugin-attacher Nov 13 05:39:58.392: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1323" Nov 13 05:39:58.394: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1323 to register on node node1 STEP: Creating pod Nov 13 05:40:14.668: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:40:14.673: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wdvt8] to have phase Bound Nov 13 05:40:14.675: INFO: PersistentVolumeClaim pvc-wdvt8 found but phase is Pending instead of Bound. 
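
The podInfoOnMount setting exercised by the two CSI mock specs above (nil and false) lives on the CSIDriver object that the mock deployment registers. As a non-authoritative sketch of that knob only, the following declares a CSIDriver with podInfoOnMount set to false via client-go; the driver name is a placeholder, not the mock driver's actual manifest.

    // csidriver.go - illustrative sketch: register a CSIDriver object with
    // podInfoOnMount explicitly set to false, the knob exercised by the
    // "should not be passed when podInfoOnMount=false" spec above.
    // The driver name is a placeholder.
    package main

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        attachRequired := true
        podInfoOnMount := false // pod info is NOT passed to NodePublishVolume
        driver := &storagev1.CSIDriver{
            ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-example"}, // placeholder name
            Spec: storagev1.CSIDriverSpec{
                AttachRequired: &attachRequired,
                PodInfoOnMount: &podInfoOnMount,
            },
        }
        if _, err := cs.StorageV1().CSIDrivers().Create(context.TODO(), driver, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
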
Nov 13 05:40:16.680: INFO: PersistentVolumeClaim pvc-wdvt8 found and phase=Bound (2.00762115s) STEP: Deleting the previously created pod Nov 13 05:40:28.701: INFO: Deleting pod "pvc-volume-tester-97cg7" in namespace "csi-mock-volumes-1323" Nov 13 05:40:28.707: INFO: Wait up to 5m0s for pod "pvc-volume-tester-97cg7" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:40:32.720: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/441fc185-e601-492c-b407-6bab53f36648/volumes/kubernetes.io~csi/pvc-63323a8e-07a0-4e85-adf6-aeb4a8330ce1/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-97cg7 Nov 13 05:40:32.720: INFO: Deleting pod "pvc-volume-tester-97cg7" in namespace "csi-mock-volumes-1323" STEP: Deleting claim pvc-wdvt8 Nov 13 05:40:32.730: INFO: Waiting up to 2m0s for PersistentVolume pvc-63323a8e-07a0-4e85-adf6-aeb4a8330ce1 to get deleted Nov 13 05:40:32.732: INFO: PersistentVolume pvc-63323a8e-07a0-4e85-adf6-aeb4a8330ce1 found and phase=Bound (2.012393ms) Nov 13 05:40:34.735: INFO: PersistentVolume pvc-63323a8e-07a0-4e85-adf6-aeb4a8330ce1 was removed STEP: Deleting storageclass csi-mock-volumes-1323-scsh7nd STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1323 STEP: Waiting for namespaces [csi-mock-volumes-1323] to vanish STEP: uninstalling csi mock driver Nov 13 05:40:40.749: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-attacher Nov 13 05:40:40.753: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1323 Nov 13 05:40:40.756: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1323 Nov 13 05:40:40.760: INFO: deleting *v1.Role: csi-mock-volumes-1323-7550/external-attacher-cfg-csi-mock-volumes-1323 Nov 13 05:40:40.763: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1323-7550/csi-attacher-role-cfg Nov 13 05:40:40.766: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-provisioner Nov 13 05:40:40.769: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1323 Nov 13 05:40:40.773: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1323 Nov 13 05:40:40.776: INFO: deleting *v1.Role: csi-mock-volumes-1323-7550/external-provisioner-cfg-csi-mock-volumes-1323 Nov 13 05:40:40.779: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1323-7550/csi-provisioner-role-cfg Nov 13 05:40:40.783: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-resizer Nov 13 05:40:40.787: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1323 Nov 13 05:40:40.790: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1323 Nov 13 05:40:40.794: INFO: deleting *v1.Role: csi-mock-volumes-1323-7550/external-resizer-cfg-csi-mock-volumes-1323 Nov 13 05:40:40.797: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1323-7550/csi-resizer-role-cfg Nov 13 05:40:40.800: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-snapshotter Nov 13 05:40:40.805: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1323 Nov 13 05:40:40.809: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1323 Nov 13 05:40:40.812: INFO: deleting *v1.Role: csi-mock-volumes-1323-7550/external-snapshotter-leaderelection-csi-mock-volumes-1323 Nov 13 
05:40:40.816: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1323-7550/external-snapshotter-leaderelection Nov 13 05:40:40.819: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1323-7550/csi-mock Nov 13 05:40:40.823: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1323 Nov 13 05:40:40.826: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1323 Nov 13 05:40:40.829: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1323 Nov 13 05:40:40.831: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1323 Nov 13 05:40:40.834: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1323 Nov 13 05:40:40.838: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1323 Nov 13 05:40:40.841: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1323 Nov 13 05:40:40.844: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1323-7550/csi-mockplugin Nov 13 05:40:40.848: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1323 Nov 13 05:40:40.851: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1323-7550/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1323-7550 STEP: Waiting for namespaces [csi-mock-volumes-1323-7550] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:00.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:63.253 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":1,"skipped":4,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1113 05:39:57.631598 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:57.631: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:57.633: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-5330 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock 
driver Nov 13 05:39:57.726: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-attacher Nov 13 05:39:57.728: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5330 Nov 13 05:39:57.728: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5330 Nov 13 05:39:57.730: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5330 Nov 13 05:39:57.733: INFO: creating *v1.Role: csi-mock-volumes-5330-3407/external-attacher-cfg-csi-mock-volumes-5330 Nov 13 05:39:57.736: INFO: creating *v1.RoleBinding: csi-mock-volumes-5330-3407/csi-attacher-role-cfg Nov 13 05:39:57.739: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-provisioner Nov 13 05:39:57.742: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5330 Nov 13 05:39:57.742: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5330 Nov 13 05:39:57.744: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5330 Nov 13 05:39:57.747: INFO: creating *v1.Role: csi-mock-volumes-5330-3407/external-provisioner-cfg-csi-mock-volumes-5330 Nov 13 05:39:57.750: INFO: creating *v1.RoleBinding: csi-mock-volumes-5330-3407/csi-provisioner-role-cfg Nov 13 05:39:57.752: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-resizer Nov 13 05:39:57.754: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5330 Nov 13 05:39:57.754: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5330 Nov 13 05:39:57.757: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5330 Nov 13 05:39:57.759: INFO: creating *v1.Role: csi-mock-volumes-5330-3407/external-resizer-cfg-csi-mock-volumes-5330 Nov 13 05:39:57.761: INFO: creating *v1.RoleBinding: csi-mock-volumes-5330-3407/csi-resizer-role-cfg Nov 13 05:39:57.764: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-snapshotter Nov 13 05:39:57.766: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5330 Nov 13 05:39:57.766: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5330 Nov 13 05:39:57.769: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5330 Nov 13 05:39:57.772: INFO: creating *v1.Role: csi-mock-volumes-5330-3407/external-snapshotter-leaderelection-csi-mock-volumes-5330 Nov 13 05:39:57.774: INFO: creating *v1.RoleBinding: csi-mock-volumes-5330-3407/external-snapshotter-leaderelection Nov 13 05:39:57.777: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-mock Nov 13 05:39:57.780: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5330 Nov 13 05:39:57.782: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5330 Nov 13 05:39:57.784: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5330 Nov 13 05:39:57.787: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5330 Nov 13 05:39:57.791: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5330 Nov 13 05:39:57.793: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5330 Nov 13 05:39:57.797: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5330 Nov 13 05:39:57.799: INFO: creating *v1.StatefulSet: csi-mock-volumes-5330-3407/csi-mockplugin Nov 13 05:39:57.804: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5330 Nov 
13 05:39:57.808: INFO: creating *v1.StatefulSet: csi-mock-volumes-5330-3407/csi-mockplugin-attacher Nov 13 05:39:57.812: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5330" Nov 13 05:39:57.814: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5330 to register on node node1 STEP: Creating pod Nov 13 05:40:07.329: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:40:07.333: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-97llk] to have phase Bound Nov 13 05:40:07.336: INFO: PersistentVolumeClaim pvc-97llk found but phase is Pending instead of Bound. Nov 13 05:40:09.339: INFO: PersistentVolumeClaim pvc-97llk found and phase=Bound (2.005800049s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-wgwnd Nov 13 05:40:19.371: INFO: Deleting pod "pvc-volume-tester-wgwnd" in namespace "csi-mock-volumes-5330" Nov 13 05:40:19.375: INFO: Wait up to 5m0s for pod "pvc-volume-tester-wgwnd" to be fully deleted STEP: Deleting claim pvc-97llk Nov 13 05:40:31.387: INFO: Waiting up to 2m0s for PersistentVolume pvc-850866fe-7bee-49d2-91c6-7152b16a38bf to get deleted Nov 13 05:40:31.391: INFO: PersistentVolume pvc-850866fe-7bee-49d2-91c6-7152b16a38bf found and phase=Bound (4.422913ms) Nov 13 05:40:33.395: INFO: PersistentVolume pvc-850866fe-7bee-49d2-91c6-7152b16a38bf was removed STEP: Deleting storageclass csi-mock-volumes-5330-schxzd7 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5330 STEP: Waiting for namespaces [csi-mock-volumes-5330] to vanish STEP: uninstalling csi mock driver Nov 13 05:40:39.408: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-attacher Nov 13 05:40:39.411: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5330 Nov 13 05:40:39.414: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5330 Nov 13 05:40:39.418: INFO: deleting *v1.Role: csi-mock-volumes-5330-3407/external-attacher-cfg-csi-mock-volumes-5330 Nov 13 05:40:39.421: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5330-3407/csi-attacher-role-cfg Nov 13 05:40:39.424: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-provisioner Nov 13 05:40:39.428: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5330 Nov 13 05:40:39.431: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5330 Nov 13 05:40:39.436: INFO: deleting *v1.Role: csi-mock-volumes-5330-3407/external-provisioner-cfg-csi-mock-volumes-5330 Nov 13 05:40:39.440: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5330-3407/csi-provisioner-role-cfg Nov 13 05:40:39.443: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-resizer Nov 13 05:40:39.446: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5330 Nov 13 05:40:39.449: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5330 Nov 13 05:40:39.452: INFO: deleting *v1.Role: csi-mock-volumes-5330-3407/external-resizer-cfg-csi-mock-volumes-5330 Nov 13 05:40:39.457: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5330-3407/csi-resizer-role-cfg Nov 13 05:40:39.460: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-snapshotter Nov 13 05:40:39.463: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5330 Nov 13 05:40:39.467: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5330 Nov 13 05:40:39.470: INFO: deleting 
*v1.Role: csi-mock-volumes-5330-3407/external-snapshotter-leaderelection-csi-mock-volumes-5330 Nov 13 05:40:39.473: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5330-3407/external-snapshotter-leaderelection Nov 13 05:40:39.476: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5330-3407/csi-mock Nov 13 05:40:39.479: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5330 Nov 13 05:40:39.482: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5330 Nov 13 05:40:39.486: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5330 Nov 13 05:40:39.488: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5330 Nov 13 05:40:39.492: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5330 Nov 13 05:40:39.495: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5330 Nov 13 05:40:39.499: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5330 Nov 13 05:40:39.502: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5330-3407/csi-mockplugin Nov 13 05:40:39.506: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5330 Nov 13 05:40:39.509: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5330-3407/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5330-3407 STEP: Waiting for namespaces [csi-mock-volumes-5330-3407] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:01.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:63.941 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":1,"skipped":4,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:01.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Nov 13 05:41:01.597: INFO: Only supported for providers [aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:01.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
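
The "Checking if VolumeAttachment was created for the pod" step in the VolumeAttach spec above verifies that the attach controller produced a VolumeAttachment object for the claim's PV. A minimal, illustrative way to perform a similar check with client-go is sketched below; the PV name is a placeholder, not one from this run.

    // volumeattachment.go - rough sketch of checking for a VolumeAttachment
    // that references a given PV, as the VolumeAttach spec above does.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        const pvName = "pvc-example-uid" // placeholder PV name
        list, err := cs.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, va := range list.Items {
            if va.Spec.Source.PersistentVolumeName != nil && *va.Spec.Source.PersistentVolumeName == pvName {
                fmt.Printf("found VolumeAttachment %s: attacher=%s node=%s attached=%v\n",
                    va.Name, va.Spec.Attacher, va.Spec.NodeName, va.Status.Attached)
            }
        }
    }
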
STEP: Destroying namespace "volume-provisioning-3967" for this suite. S [SKIPPING] [0.030 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Invalid AWS KMS key /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824 should report an error and create no PV [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Only supported for providers [aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1113 05:39:59.074325 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:59.074: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:59.076: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-8005 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:40:01.502: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-attacher Nov 13 05:40:01.504: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8005 Nov 13 05:40:01.504: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8005 Nov 13 05:40:01.507: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8005 Nov 13 05:40:01.510: INFO: creating *v1.Role: csi-mock-volumes-8005-3368/external-attacher-cfg-csi-mock-volumes-8005 Nov 13 05:40:01.513: INFO: creating *v1.RoleBinding: csi-mock-volumes-8005-3368/csi-attacher-role-cfg Nov 13 05:40:01.515: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-provisioner Nov 13 05:40:01.518: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8005 Nov 13 05:40:01.518: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8005 Nov 13 05:40:01.521: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8005 Nov 13 05:40:01.523: INFO: creating *v1.Role: csi-mock-volumes-8005-3368/external-provisioner-cfg-csi-mock-volumes-8005 Nov 13 05:40:01.526: INFO: creating *v1.RoleBinding: csi-mock-volumes-8005-3368/csi-provisioner-role-cfg Nov 13 05:40:01.528: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-resizer Nov 13 05:40:01.531: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8005 Nov 13 05:40:01.531: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8005 Nov 13 05:40:01.533: INFO: creating *v1.ClusterRoleBinding: 
csi-resizer-role-csi-mock-volumes-8005 Nov 13 05:40:01.535: INFO: creating *v1.Role: csi-mock-volumes-8005-3368/external-resizer-cfg-csi-mock-volumes-8005 Nov 13 05:40:01.538: INFO: creating *v1.RoleBinding: csi-mock-volumes-8005-3368/csi-resizer-role-cfg Nov 13 05:40:01.541: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-snapshotter Nov 13 05:40:01.544: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8005 Nov 13 05:40:01.544: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8005 Nov 13 05:40:01.547: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8005 Nov 13 05:40:01.549: INFO: creating *v1.Role: csi-mock-volumes-8005-3368/external-snapshotter-leaderelection-csi-mock-volumes-8005 Nov 13 05:40:01.552: INFO: creating *v1.RoleBinding: csi-mock-volumes-8005-3368/external-snapshotter-leaderelection Nov 13 05:40:01.554: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-mock Nov 13 05:40:01.557: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8005 Nov 13 05:40:01.560: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8005 Nov 13 05:40:01.562: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8005 Nov 13 05:40:01.565: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8005 Nov 13 05:40:01.567: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8005 Nov 13 05:40:01.570: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8005 Nov 13 05:40:01.573: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8005 Nov 13 05:40:01.576: INFO: creating *v1.StatefulSet: csi-mock-volumes-8005-3368/csi-mockplugin Nov 13 05:40:01.581: INFO: creating *v1.StatefulSet: csi-mock-volumes-8005-3368/csi-mockplugin-attacher Nov 13 05:40:01.612: INFO: creating *v1.StatefulSet: csi-mock-volumes-8005-3368/csi-mockplugin-resizer Nov 13 05:40:01.616: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8005 to register on node node1 STEP: Creating pod Nov 13 05:40:11.134: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:40:11.139: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-dsztb] to have phase Bound Nov 13 05:40:11.141: INFO: PersistentVolumeClaim pvc-dsztb found but phase is Pending instead of Bound. 
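
The "Expanding current pvc" step that follows below triggers expansion simply by raising the claim's requested storage; the external-resizer and kubelet then perform the controller and node expansion, which is why the pod restart matters when nodeExpansion=on. A rough client-go sketch of that trigger is shown here; the namespace, claim name and new size are placeholders.

    // pvcexpand.go - minimal sketch of the expansion trigger: bump the claim's
    // storage request and update it. Names and the new size are placeholders.
    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns, claim := "csi-mock-volumes-example", "pvc-example" // placeholders
        pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // A bound PVC always carries a storage request, so the map is non-nil.
        pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse("2Gi")
        if _, err := cs.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
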
Nov 13 05:40:13.145: INFO: PersistentVolumeClaim pvc-dsztb found and phase=Bound (2.006025073s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Nov 13 05:40:35.184: INFO: Deleting pod "pvc-volume-tester-nkn5b" in namespace "csi-mock-volumes-8005" Nov 13 05:40:35.188: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nkn5b" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-nkn5b Nov 13 05:40:43.209: INFO: Deleting pod "pvc-volume-tester-nkn5b" in namespace "csi-mock-volumes-8005" STEP: Deleting pod pvc-volume-tester-pqxnf Nov 13 05:40:43.212: INFO: Deleting pod "pvc-volume-tester-pqxnf" in namespace "csi-mock-volumes-8005" Nov 13 05:40:43.216: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pqxnf" to be fully deleted STEP: Deleting claim pvc-dsztb Nov 13 05:40:45.229: INFO: Waiting up to 2m0s for PersistentVolume pvc-df32d16d-6459-464a-964a-9424ec1179b3 to get deleted Nov 13 05:40:45.231: INFO: PersistentVolume pvc-df32d16d-6459-464a-964a-9424ec1179b3 found and phase=Bound (1.880702ms) Nov 13 05:40:47.234: INFO: PersistentVolume pvc-df32d16d-6459-464a-964a-9424ec1179b3 was removed STEP: Deleting storageclass csi-mock-volumes-8005-sc2tll8 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8005 STEP: Waiting for namespaces [csi-mock-volumes-8005] to vanish STEP: uninstalling csi mock driver Nov 13 05:40:53.249: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-attacher Nov 13 05:40:53.254: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8005 Nov 13 05:40:53.258: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8005 Nov 13 05:40:53.261: INFO: deleting *v1.Role: csi-mock-volumes-8005-3368/external-attacher-cfg-csi-mock-volumes-8005 Nov 13 05:40:53.265: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8005-3368/csi-attacher-role-cfg Nov 13 05:40:53.270: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-provisioner Nov 13 05:40:53.274: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8005 Nov 13 05:40:53.278: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8005 Nov 13 05:40:53.282: INFO: deleting *v1.Role: csi-mock-volumes-8005-3368/external-provisioner-cfg-csi-mock-volumes-8005 Nov 13 05:40:53.285: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8005-3368/csi-provisioner-role-cfg Nov 13 05:40:53.288: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-resizer Nov 13 05:40:53.293: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8005 Nov 13 05:40:53.296: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8005 Nov 13 05:40:53.299: INFO: deleting *v1.Role: csi-mock-volumes-8005-3368/external-resizer-cfg-csi-mock-volumes-8005 Nov 13 05:40:53.303: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8005-3368/csi-resizer-role-cfg Nov 13 05:40:53.308: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-snapshotter Nov 13 05:40:53.311: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8005 Nov 13 05:40:53.318: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8005 Nov 13 05:40:53.326: INFO: deleting *v1.Role: csi-mock-volumes-8005-3368/external-snapshotter-leaderelection-csi-mock-volumes-8005 Nov 13 05:40:53.330: 
INFO: deleting *v1.RoleBinding: csi-mock-volumes-8005-3368/external-snapshotter-leaderelection Nov 13 05:40:53.333: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8005-3368/csi-mock Nov 13 05:40:53.336: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8005 Nov 13 05:40:53.340: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8005 Nov 13 05:40:53.343: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8005 Nov 13 05:40:53.347: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8005 Nov 13 05:40:53.351: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8005 Nov 13 05:40:53.354: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8005 Nov 13 05:40:53.359: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8005 Nov 13 05:40:53.363: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8005-3368/csi-mockplugin Nov 13 05:40:53.367: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8005-3368/csi-mockplugin-attacher Nov 13 05:40:53.370: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8005-3368/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-8005-3368 STEP: Waiting for namespaces [csi-mock-volumes-8005-3368] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:05.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:67.664 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":1,"skipped":51,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:34.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 13 05:40:38.721: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-eb7d16b9-a196-4b65-838c-6693771a6dc4] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:38.721: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:38.816: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5cc37574-6080-4c6d-8174-5763185f75f6] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:38.816: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:38.904: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-23117e64-95ae-4d22-b1c3-774ea9019519] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:38.904: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:38.993: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8ca389d6-79fe-48b0-8563-1a2b558376a3] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:38.993: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:39.083: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-06455f43-5d85-4167-92f8-41df461956bf] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:39.083: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:39.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-dd1ed6f1-133b-4875-b14e-6369df7034b1] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:39.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:40:39.285: INFO: Creating a PV followed by a PVC Nov 13 05:40:39.291: INFO: Creating a PV followed by a PVC Nov 13 05:40:39.297: INFO: Creating a PV followed by a PVC Nov 13 05:40:39.303: INFO: Creating a PV followed by a PVC Nov 13 05:40:39.308: INFO: Creating a PV followed by a PVC Nov 13 05:40:39.313: INFO: Creating a PV followed by a PVC Nov 13 05:40:49.355: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 13 05:40:51.371: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-91878b11-7b35-4b5b-bae2-942d90f8f9a3] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:51.372: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:51.599: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cf6432a3-6e7a-4d4f-a98b-63efe93979ec] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} 
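
Each "Creating a PV followed by a PVC" entry pairs one of the freshly created /tmp/local-volume-test-* directories with a local PersistentVolume pinned to its node. As an illustration only, a local PV of that shape could be declared as below; the path, node name, capacity and storage class are placeholders rather than the test's actual values.

    // localpv.go - illustrative sketch of a local PersistentVolume backed by a
    // directory on one node and pinned there via nodeAffinity. All values are
    // placeholders.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func localPV() *corev1.PersistentVolume {
        return &corev1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
            Spec: corev1.PersistentVolumeSpec{
                Capacity: corev1.ResourceList{
                    corev1.ResourceStorage: resource.MustParse("10Mi"), // placeholder size
                },
                AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
                StorageClassName:              "local-storage", // placeholder class
                PersistentVolumeSource: corev1.PersistentVolumeSource{
                    Local: &corev1.LocalVolumeSource{
                        Path: "/tmp/local-volume-test-example", // placeholder path
                    },
                },
                // A local PV must be pinned to the node that owns the directory.
                NodeAffinity: &corev1.VolumeNodeAffinity{
                    Required: &corev1.NodeSelector{
                        NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                            MatchExpressions: []corev1.NodeSelectorRequirement{{
                                Key:      "kubernetes.io/hostname",
                                Operator: corev1.NodeSelectorOpIn,
                                Values:   []string{"node1"},
                            }},
                        }},
                    },
                },
            },
        }
    }

    func main() { _ = localPV() }
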
Nov 13 05:40:51.599: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:51.678: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a20cf677-2ca0-42c3-a679-07e70ca9baa5] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:51.678: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:51.772: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f54d0735-b4c1-4a6b-b902-fb68a2d6673f] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:51.772: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:51.859: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1d6813bb-5351-42f8-b2b0-ebe649db0785] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:51.859: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:51.977: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4fd8a679-dd8e-4d34-8d6e-7f7d75446dc0] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:51.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:40:52.058: INFO: Creating a PV followed by a PVC Nov 13 05:40:52.065: INFO: Creating a PV followed by a PVC Nov 13 05:40:52.071: INFO: Creating a PV followed by a PVC Nov 13 05:40:52.076: INFO: Creating a PV followed by a PVC Nov 13 05:40:52.082: INFO: Creating a PV followed by a PVC Nov 13 05:40:52.088: INFO: Creating a PV followed by a PVC Nov 13 05:41:02.136: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Nov 13 05:41:02.136: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 13 05:41:02.137: INFO: Deleting PersistentVolumeClaim "pvc-fb5v2" Nov 13 05:41:02.142: INFO: Deleting PersistentVolume "local-pvb9q67" STEP: Cleaning up PVC and PV Nov 13 05:41:02.146: INFO: Deleting PersistentVolumeClaim "pvc-2jfkq" Nov 13 05:41:02.150: INFO: Deleting PersistentVolume "local-pvf7cbz" STEP: Cleaning up PVC and PV Nov 13 05:41:02.153: INFO: Deleting PersistentVolumeClaim "pvc-twqhj" Nov 13 05:41:02.157: INFO: Deleting PersistentVolume "local-pvt7m95" STEP: Cleaning up PVC and PV Nov 13 05:41:02.161: INFO: Deleting PersistentVolumeClaim "pvc-st22m" Nov 13 05:41:02.165: INFO: Deleting PersistentVolume "local-pv8tpk9" STEP: Cleaning up PVC and PV Nov 13 05:41:02.168: INFO: Deleting PersistentVolumeClaim "pvc-xj65m" Nov 13 05:41:02.172: INFO: Deleting PersistentVolume "local-pvdvrjv" STEP: Cleaning up PVC and PV Nov 13 05:41:02.175: INFO: Deleting 
PersistentVolumeClaim "pvc-6hb2k" Nov 13 05:41:02.179: INFO: Deleting PersistentVolume "local-pvvq6ng" STEP: Removing the test directory Nov 13 05:41:02.182: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-eb7d16b9-a196-4b65-838c-6693771a6dc4] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:02.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:06.005: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5cc37574-6080-4c6d-8174-5763185f75f6] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:06.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:06.246: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-23117e64-95ae-4d22-b1c3-774ea9019519] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:06.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:06.366: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8ca389d6-79fe-48b0-8563-1a2b558376a3] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:06.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:06.467: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-06455f43-5d85-4167-92f8-41df461956bf] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:06.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:06.749: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dd1ed6f1-133b-4875-b14e-6369df7034b1] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node1-r77nw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:06.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 13 05:41:07.439: INFO: Deleting PersistentVolumeClaim "pvc-l9fdn" Nov 13 05:41:07.444: INFO: Deleting PersistentVolume "local-pv28mhs" STEP: Cleaning up PVC and PV Nov 13 05:41:07.448: INFO: Deleting PersistentVolumeClaim "pvc-pspzh" Nov 13 05:41:07.452: INFO: Deleting PersistentVolume "local-pvk6tdp" STEP: Cleaning up PVC and PV Nov 13 05:41:07.456: INFO: Deleting PersistentVolumeClaim "pvc-vscn9" Nov 13 05:41:07.459: INFO: Deleting PersistentVolume "local-pvgvxh5" STEP: Cleaning up PVC and PV Nov 13 05:41:07.463: INFO: Deleting PersistentVolumeClaim "pvc-b5d9l" Nov 13 05:41:07.466: INFO: Deleting PersistentVolume "local-pvkg2tw" STEP: Cleaning up PVC and PV Nov 13 05:41:07.470: INFO: Deleting 
PersistentVolumeClaim "pvc-clgjr" Nov 13 05:41:07.473: INFO: Deleting PersistentVolume "local-pvmpb8v" STEP: Cleaning up PVC and PV Nov 13 05:41:07.478: INFO: Deleting PersistentVolumeClaim "pvc-ggtz6" Nov 13 05:41:07.481: INFO: Deleting PersistentVolume "local-pv6jjrh" STEP: Removing the test directory Nov 13 05:41:07.485: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-91878b11-7b35-4b5b-bae2-942d90f8f9a3] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:07.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:07.592: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cf6432a3-6e7a-4d4f-a98b-63efe93979ec] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:07.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:07.693: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a20cf677-2ca0-42c3-a679-07e70ca9baa5] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:07.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:07.798: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f54d0735-b4c1-4a6b-b902-fb68a2d6673f] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:07.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:07.892: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1d6813bb-5351-42f8-b2b0-ebe649db0785] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:07.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:08.001: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4fd8a679-dd8e-4d34-8d6e-7f7d75446dc0] Namespace:persistent-local-volumes-test-1455 PodName:hostexec-node2-zbbnq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:08.001: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:08.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1455" for this suite. 
S [SKIPPING] [33.428 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:412 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:08.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-93e70af1-5a3f-4393-a17a-24a7f18be4c9" Nov 13 05:41:10.240: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-93e70af1-5a3f-4393-a17a-24a7f18be4c9 && dd if=/dev/zero of=/tmp/local-volume-test-93e70af1-5a3f-4393-a17a-24a7f18be4c9/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-93e70af1-5a3f-4393-a17a-24a7f18be4c9/file] Namespace:persistent-local-volumes-test-837 PodName:hostexec-node2-v8xg7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:10.240: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:10.374: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-93e70af1-5a3f-4393-a17a-24a7f18be4c9/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-837 PodName:hostexec-node2-v8xg7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:10.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:41:10.475: INFO: Creating a PV followed by a PVC Nov 13 05:41:10.481: INFO: Waiting for PV local-pvvds2l to bind to PVC pvc-zbph8 Nov 13 05:41:10.481: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zbph8] to have phase Bound Nov 13 05:41:10.483: INFO: PersistentVolumeClaim pvc-zbph8 found but phase is Pending instead of Bound. 
Nov 13 05:41:12.486: INFO: PersistentVolumeClaim pvc-zbph8 found and phase=Bound (2.004803017s) Nov 13 05:41:12.486: INFO: Waiting up to 3m0s for PersistentVolume local-pvvds2l to have phase Bound Nov 13 05:41:12.488: INFO: PersistentVolume local-pvvds2l found and phase=Bound (2.204697ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Nov 13 05:41:12.492: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:41:12.494: INFO: Deleting PersistentVolumeClaim "pvc-zbph8" Nov 13 05:41:12.498: INFO: Deleting PersistentVolume "local-pvvds2l" Nov 13 05:41:12.502: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-93e70af1-5a3f-4393-a17a-24a7f18be4c9/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-837 PodName:hostexec-node2-v8xg7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:12.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-93e70af1-5a3f-4393-a17a-24a7f18be4c9/file Nov 13 05:41:12.597: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-837 PodName:hostexec-node2-v8xg7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:12.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-93e70af1-5a3f-4393-a17a-24a7f18be4c9 Nov 13 05:41:12.720: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-93e70af1-5a3f-4393-a17a-24a7f18be4c9] Namespace:persistent-local-volumes-test-837 PodName:hostexec-node2-v8xg7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:12.720: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:12.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-837" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.742 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:13.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:41:13.042: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:13.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8999" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:01.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-b25c10ca-8e0e-47e2-b22f-9899cda947fe" Nov 13 05:41:09.696: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b25c10ca-8e0e-47e2-b22f-9899cda947fe && dd if=/dev/zero of=/tmp/local-volume-test-b25c10ca-8e0e-47e2-b22f-9899cda947fe/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-b25c10ca-8e0e-47e2-b22f-9899cda947fe/file] Namespace:persistent-local-volumes-test-8501 PodName:hostexec-node1-zqz4b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:09.696: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:09.877: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b25c10ca-8e0e-47e2-b22f-9899cda947fe/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8501 PodName:hostexec-node1-zqz4b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:09.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:41:10.100: INFO: Creating a PV followed by a PVC Nov 13 05:41:10.106: INFO: Waiting for PV local-pvmn55t to bind to PVC pvc-hc25w Nov 13 05:41:10.106: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hc25w] to have phase Bound Nov 13 05:41:10.108: INFO: PersistentVolumeClaim pvc-hc25w found but phase is Pending instead of Bound. Nov 13 05:41:12.112: INFO: PersistentVolumeClaim pvc-hc25w found and phase=Bound (2.005738008s) Nov 13 05:41:12.112: INFO: Waiting up to 3m0s for PersistentVolume local-pvmn55t to have phase Bound Nov 13 05:41:12.114: INFO: PersistentVolume local-pvmn55t found and phase=Bound (2.480375ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:41:20.140: INFO: pod "pod-ceaf7ab5-73ad-4284-a22d-93c7a4c3094a" created on Node "node1" STEP: Writing in pod1 Nov 13 05:41:20.140: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8501 PodName:pod-ceaf7ab5-73ad-4284-a22d-93c7a4c3094a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:41:20.140: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:20.220: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000183 seconds, 96.1KB/s", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:41:20.220: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-8501 PodName:pod-ceaf7ab5-73ad-4284-a22d-93c7a4c3094a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:41:20.220: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:20.316: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod1 Nov 13 05:41:20.316: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-8501 PodName:pod-ceaf7ab5-73ad-4284-a22d-93c7a4c3094a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:41:20.316: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:20.466: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000030 seconds, 358.1KB/s", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-ceaf7ab5-73ad-4284-a22d-93c7a4c3094a in namespace persistent-local-volumes-test-8501 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:41:20.472: INFO: Deleting PersistentVolumeClaim "pvc-hc25w" Nov 13 05:41:20.475: INFO: Deleting PersistentVolume "local-pvmn55t" Nov 13 05:41:20.479: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b25c10ca-8e0e-47e2-b22f-9899cda947fe/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8501 PodName:hostexec-node1-zqz4b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:20.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-b25c10ca-8e0e-47e2-b22f-9899cda947fe/file Nov 13 05:41:20.598: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8501 PodName:hostexec-node1-zqz4b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:20.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-b25c10ca-8e0e-47e2-b22f-9899cda947fe Nov 13 05:41:20.975: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b25c10ca-8e0e-47e2-b22f-9899cda947fe] Namespace:persistent-local-volumes-test-8501 PodName:hostexec-node1-zqz4b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:20.975: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:21.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8501" for this suite. 
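
In the [Volume type: block] specs above, the "block device" is a loop device backed by a ~20 MiB file on the node, and the write/read check goes through the raw device with dd and hexdump. A condensed sketch of that lifecycle, run as root on the node, using the sizes and flags shown in the log and a placeholder directory instead of the suite's UUID path:

DIR=/tmp/local-volume-test-block-example   # placeholder
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120           # 20 MiB backing file
losetup -f "$DIR/file"                                      # attach it to the first free loop device
LOOP=$(losetup | grep "$DIR/file" | awk '{ print $1 }')     # find which /dev/loopN was used
echo test-file-content > /tmp/test-file
dd if=/tmp/test-file of="$LOOP" bs=512 count=100            # write through the raw device
hexdump -n 100 -e '100 "%_p"' "$LOOP" | head -1             # read the bytes back as printable chars
losetup -d "$LOOP"                                          # teardown: detach the loop device
rm -r "$DIR" /tmp/test-file

The hexdump line starting with "test-file-content" followed by padding dots is exactly what the passing spec reports back from pod1.
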
• [SLOW TEST:19.490 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":47,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:53.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:40:59.665: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-4a003637-7802-42b4-aa7b-97cebf67974f && mount --bind /tmp/local-volume-test-4a003637-7802-42b4-aa7b-97cebf67974f /tmp/local-volume-test-4a003637-7802-42b4-aa7b-97cebf67974f] Namespace:persistent-local-volumes-test-7050 PodName:hostexec-node1-tnrgl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:40:59.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:40:59.764: INFO: Creating a PV followed by a PVC Nov 13 05:40:59.772: INFO: Waiting for PV local-pv2zss5 to bind to PVC pvc-dxzww Nov 13 05:40:59.772: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dxzww] to have phase Bound Nov 13 05:40:59.774: INFO: PersistentVolumeClaim pvc-dxzww found but phase is Pending instead of Bound. Nov 13 05:41:01.777: INFO: PersistentVolumeClaim pvc-dxzww found but phase is Pending instead of Bound. Nov 13 05:41:03.780: INFO: PersistentVolumeClaim pvc-dxzww found but phase is Pending instead of Bound. Nov 13 05:41:05.784: INFO: PersistentVolumeClaim pvc-dxzww found but phase is Pending instead of Bound. Nov 13 05:41:07.787: INFO: PersistentVolumeClaim pvc-dxzww found but phase is Pending instead of Bound. Nov 13 05:41:09.791: INFO: PersistentVolumeClaim pvc-dxzww found but phase is Pending instead of Bound. 
Nov 13 05:41:11.795: INFO: PersistentVolumeClaim pvc-dxzww found and phase=Bound (12.023324524s) Nov 13 05:41:11.795: INFO: Waiting up to 3m0s for PersistentVolume local-pv2zss5 to have phase Bound Nov 13 05:41:11.797: INFO: PersistentVolume local-pv2zss5 found and phase=Bound (2.050526ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:41:17.824: INFO: pod "pod-2f0e9c05-77ab-4371-ab94-a61cd670f143" created on Node "node1" STEP: Writing in pod1 Nov 13 05:41:17.824: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7050 PodName:pod-2f0e9c05-77ab-4371-ab94-a61cd670f143 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:41:17.824: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:18.058: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:41:18.058: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7050 PodName:pod-2f0e9c05-77ab-4371-ab94-a61cd670f143 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:41:18.058: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:18.498: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-2f0e9c05-77ab-4371-ab94-a61cd670f143 in namespace persistent-local-volumes-test-7050 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:41:22.526: INFO: pod "pod-6d91b5a2-9e68-4c07-a206-2e96acf49d52" created on Node "node1" STEP: Reading in pod2 Nov 13 05:41:22.526: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7050 PodName:pod-6d91b5a2-9e68-4c07-a206-2e96acf49d52 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:41:22.526: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:22.787: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-6d91b5a2-9e68-4c07-a206-2e96acf49d52 in namespace persistent-local-volumes-test-7050 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:41:22.791: INFO: Deleting PersistentVolumeClaim "pvc-dxzww" Nov 13 05:41:22.795: INFO: Deleting PersistentVolume "local-pv2zss5" STEP: Removing the test directory Nov 13 05:41:22.799: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-4a003637-7802-42b4-aa7b-97cebf67974f && rm -r /tmp/local-volume-test-4a003637-7802-42b4-aa7b-97cebf67974f] Namespace:persistent-local-volumes-test-7050 PodName:hostexec-node1-tnrgl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:22.799: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:23.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7050" for this suite. • [SLOW TEST:29.412 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":16,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:00.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:41:00.981: INFO: The status of Pod test-hostpath-type-gqtnf is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:02.985: INFO: The status of Pod test-hostpath-type-gqtnf is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:04.986: INFO: The status of Pod test-hostpath-type-gqtnf is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:06.985: INFO: The status of Pod test-hostpath-type-gqtnf is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:08.987: INFO: The status of Pod test-hostpath-type-gqtnf is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:23.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-4340" for this suite. 
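
The [Volume type: dir-bindmounted] spec above (the 29-second PASSED run) models its local volume as a directory bind-mounted onto itself, which makes the path a distinct mount point without needing a separate device; teardown therefore has to unmount before removing the directory. The setup/teardown pair, with a placeholder path:

DIR=/tmp/local-volume-test-bindmount-example   # placeholder; the suite uses a random UUID
mkdir "$DIR"
mount --bind "$DIR" "$DIR"   # self bind-mount: the directory becomes its own mount point
# ... pod1 writes /mnt/volume1/test-file, pod2 reads the same content back ...
umount "$DIR"                # unmount first; rm -r on a live mount point fails with EBUSY
rm -r "$DIR"
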
• [SLOW TEST:22.099 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset","total":-1,"completed":2,"skipped":38,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:05.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-3254 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:40:05.818: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-attacher Nov 13 05:40:05.822: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3254 Nov 13 05:40:05.822: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3254 Nov 13 05:40:05.825: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3254 Nov 13 05:40:05.827: INFO: creating *v1.Role: csi-mock-volumes-3254-9030/external-attacher-cfg-csi-mock-volumes-3254 Nov 13 05:40:05.830: INFO: creating *v1.RoleBinding: csi-mock-volumes-3254-9030/csi-attacher-role-cfg Nov 13 05:40:05.832: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-provisioner Nov 13 05:40:05.835: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3254 Nov 13 05:40:05.835: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3254 Nov 13 05:40:05.838: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3254 Nov 13 05:40:05.840: INFO: creating *v1.Role: csi-mock-volumes-3254-9030/external-provisioner-cfg-csi-mock-volumes-3254 Nov 13 05:40:05.843: INFO: creating *v1.RoleBinding: csi-mock-volumes-3254-9030/csi-provisioner-role-cfg Nov 13 05:40:05.845: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-resizer Nov 13 05:40:05.848: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3254 Nov 13 05:40:05.848: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3254 Nov 13 05:40:05.850: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3254 Nov 13 05:40:05.853: INFO: creating *v1.Role: csi-mock-volumes-3254-9030/external-resizer-cfg-csi-mock-volumes-3254 Nov 13 05:40:05.856: INFO: creating *v1.RoleBinding: csi-mock-volumes-3254-9030/csi-resizer-role-cfg Nov 13 05:40:05.858: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-snapshotter Nov 13 05:40:05.861: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3254 Nov 13 05:40:05.861: INFO: Define 
cluster role external-snapshotter-runner-csi-mock-volumes-3254 Nov 13 05:40:05.863: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3254 Nov 13 05:40:05.867: INFO: creating *v1.Role: csi-mock-volumes-3254-9030/external-snapshotter-leaderelection-csi-mock-volumes-3254 Nov 13 05:40:05.869: INFO: creating *v1.RoleBinding: csi-mock-volumes-3254-9030/external-snapshotter-leaderelection Nov 13 05:40:05.872: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-mock Nov 13 05:40:05.875: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3254 Nov 13 05:40:05.879: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3254 Nov 13 05:40:05.881: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3254 Nov 13 05:40:05.884: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3254 Nov 13 05:40:05.886: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3254 Nov 13 05:40:05.889: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3254 Nov 13 05:40:05.891: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3254 Nov 13 05:40:05.894: INFO: creating *v1.StatefulSet: csi-mock-volumes-3254-9030/csi-mockplugin Nov 13 05:40:05.898: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3254 Nov 13 05:40:05.901: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3254" Nov 13 05:40:05.904: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3254 to register on node node2 I1113 05:40:11.977769 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3254","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:40:12.065164 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:40:12.067105 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3254","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:40:12.068838 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:40:12.071551 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:40:12.371899 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3254"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:40:15.419: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:40:15.424: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rcgff] 
to have phase Bound Nov 13 05:40:15.427: INFO: PersistentVolumeClaim pvc-rcgff found but phase is Pending instead of Bound. I1113 05:40:15.435949 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa"}}},"Error":"","FullError":null} Nov 13 05:40:17.430: INFO: PersistentVolumeClaim pvc-rcgff found and phase=Bound (2.00603419s) Nov 13 05:40:17.446: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rcgff] to have phase Bound Nov 13 05:40:17.449: INFO: PersistentVolumeClaim pvc-rcgff found and phase=Bound (3.18919ms) STEP: Waiting for expected CSI calls I1113 05:40:19.103508 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:19.121685 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa","storage.kubernetes.io/csiProvisionerIdentity":"1636782012067-8081-csi-mock-csi-mock-volumes-3254"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} STEP: Deleting the previously created pod Nov 13 05:40:19.450: INFO: Deleting pod "pvc-volume-tester-vctnj" in namespace "csi-mock-volumes-3254" Nov 13 05:40:19.454: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vctnj" to be fully deleted I1113 05:40:19.651071 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:19.662571 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa","storage.kubernetes.io/csiProvisionerIdentity":"1636782012067-8081-csi-mock-csi-mock-volumes-3254"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:40:20.757269 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:20.759219 27 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa","storage.kubernetes.io/csiProvisionerIdentity":"1636782012067-8081-csi-mock-csi-mock-volumes-3254"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:40:22.800481 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:22.802594 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa","storage.kubernetes.io/csiProvisionerIdentity":"1636782012067-8081-csi-mock-csi-mock-volumes-3254"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:40:26.857403 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:26.863579 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa","storage.kubernetes.io/csiProvisionerIdentity":"1636782012067-8081-csi-mock-csi-mock-volumes-3254"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:40:33.103577 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:33.106819 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-vctnj Nov 13 05:40:34.460: INFO: Deleting pod "pvc-volume-tester-vctnj" in namespace "csi-mock-volumes-3254" STEP: Deleting claim pvc-rcgff Nov 13 05:40:34.469: INFO: Waiting up to 2m0s for PersistentVolume pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa to get deleted Nov 13 05:40:34.471: INFO: PersistentVolume pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa found and phase=Bound (1.909714ms) I1113 05:40:34.481433 27 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 13 
05:40:36.475: INFO: PersistentVolume pvc-27c0f0c5-5b55-4711-a973-29702b7a90aa was removed STEP: Deleting storageclass csi-mock-volumes-3254-scwdvsj STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3254 STEP: Waiting for namespaces [csi-mock-volumes-3254] to vanish STEP: uninstalling csi mock driver Nov 13 05:40:42.515: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-attacher Nov 13 05:40:42.518: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3254 Nov 13 05:40:42.521: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3254 Nov 13 05:40:42.524: INFO: deleting *v1.Role: csi-mock-volumes-3254-9030/external-attacher-cfg-csi-mock-volumes-3254 Nov 13 05:40:42.528: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3254-9030/csi-attacher-role-cfg Nov 13 05:40:42.532: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-provisioner Nov 13 05:40:42.535: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3254 Nov 13 05:40:42.541: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3254 Nov 13 05:40:42.545: INFO: deleting *v1.Role: csi-mock-volumes-3254-9030/external-provisioner-cfg-csi-mock-volumes-3254 Nov 13 05:40:42.548: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3254-9030/csi-provisioner-role-cfg Nov 13 05:40:42.551: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-resizer Nov 13 05:40:42.555: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3254 Nov 13 05:40:42.558: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3254 Nov 13 05:40:42.561: INFO: deleting *v1.Role: csi-mock-volumes-3254-9030/external-resizer-cfg-csi-mock-volumes-3254 Nov 13 05:40:42.564: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3254-9030/csi-resizer-role-cfg Nov 13 05:40:42.569: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-snapshotter Nov 13 05:40:42.572: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3254 Nov 13 05:40:42.575: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3254 Nov 13 05:40:42.578: INFO: deleting *v1.Role: csi-mock-volumes-3254-9030/external-snapshotter-leaderelection-csi-mock-volumes-3254 Nov 13 05:40:42.584: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3254-9030/external-snapshotter-leaderelection Nov 13 05:40:42.587: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3254-9030/csi-mock Nov 13 05:40:42.592: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3254 Nov 13 05:40:42.595: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3254 Nov 13 05:40:42.598: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3254 Nov 13 05:40:42.601: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3254 Nov 13 05:40:42.604: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3254 Nov 13 05:40:42.609: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3254 Nov 13 05:40:42.612: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3254 Nov 13 05:40:42.616: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3254-9030/csi-mockplugin Nov 13 05:40:42.619: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3254 STEP: deleting the driver namespace: 
csi-mock-volumes-3254-9030 STEP: Waiting for namespaces [csi-mock-volumes-3254-9030] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:24.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:78.896 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should call NodeUnstage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error","total":-1,"completed":2,"skipped":11,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:21.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 STEP: Creating configMap with name projected-configmap-test-volume-1ac34feb-8a5a-4c63-8e42-fca3acd7d9ad STEP: Creating a pod to test consume configMaps Nov 13 05:41:21.200: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-13cd1351-b6f1-408b-957d-d6a5070038a4" in namespace "projected-6582" to be "Succeeded or Failed" Nov 13 05:41:21.204: INFO: Pod "pod-projected-configmaps-13cd1351-b6f1-408b-957d-d6a5070038a4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.438122ms Nov 13 05:41:23.207: INFO: Pod "pod-projected-configmaps-13cd1351-b6f1-408b-957d-d6a5070038a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006868356s Nov 13 05:41:25.211: INFO: Pod "pod-projected-configmaps-13cd1351-b6f1-408b-957d-d6a5070038a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010704113s STEP: Saw pod success Nov 13 05:41:25.211: INFO: Pod "pod-projected-configmaps-13cd1351-b6f1-408b-957d-d6a5070038a4" satisfied condition "Succeeded or Failed" Nov 13 05:41:25.213: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-13cd1351-b6f1-408b-957d-d6a5070038a4 container agnhost-container: STEP: delete the pod Nov 13 05:41:25.226: INFO: Waiting for pod pod-projected-configmaps-13cd1351-b6f1-408b-957d-d6a5070038a4 to disappear Nov 13 05:41:25.228: INFO: Pod pod-projected-configmaps-13cd1351-b6f1-408b-957d-d6a5070038a4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:25.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6582" for this suite. 
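
The Projected configMap spec just above consumes a ConfigMap through a projected volume while running as a non-root user with an fsGroup applied. A minimal hand-written manifest of the same shape; the namespace, names, image, key, and UID/GID values here are placeholders, not the ones the suite generates:

kubectl create namespace projected-demo
kubectl -n projected-demo create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl -n projected-demo apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-demo
spec:
  securityContext:
    runAsUser: 1000      # non-root
    fsGroup: 1001        # group ownership applied to the projected files
  containers:
  - name: demo
    image: busybox:1.36                  # any small image with a shell works here
    command: ["sh", "-c", "cat /etc/projected/data-1 && sleep 3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: demo-config
EOF

If the projection and permissions are right, the container log prints the key's value, which is roughly what the spec checks when it waits for its test pod and then pulls the container logs.
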
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":55,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:23.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 13 05:41:25.124: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-794 PodName:hostexec-node2-fxp2q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:25.124: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:25.277: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 13 05:41:25.277: INFO: exec node2: stdout: "0\n" Nov 13 05:41:25.277: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 13 05:41:25.277: INFO: exec node2: exit code: 0 Nov 13 05:41:25.277: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:25.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-794" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.209 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:25.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 13 05:41:31.357: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1406 PodName:hostexec-node2-c4lqh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:31.357: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:31.462: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 13 05:41:31.462: INFO: exec node2: stdout: "0\n" Nov 13 05:41:31.462: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 13 05:41:31.462: INFO: exec node2: exit code: 0 Nov 13 05:41:31.462: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:31.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1406" for this suite. 
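
Both [Volume type: gce-localssd-scsi-fs] specs above skip after probing the node for local SSDs. Note that the probe reports exit code 0 even though ls prints an error: in a shell pipeline the exit status is that of the last command (wc -l here), so a missing directory still yields a count of "0", which the suite turns into the "Requires at least 1 scsi fs localSSD" skip. The probe as a standalone sketch:

SSD_DIR=/mnt/disks/by-uuid/google-local-ssds-scsi-fs
COUNT=$(ls -1 "$SSD_DIR" | wc -l)   # ls may complain on stderr, but the pipeline still exits 0
if [ "$COUNT" -lt 1 ]; then
  echo "Requires at least 1 scsi fs localSSD"   # the suite skips the spec in this case
fi
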
S [SKIPPING] in Spec Setup (BeforeEach) [6.153 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:24.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:41:24.824: INFO: The status of Pod test-hostpath-type-86dkv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:26.829: INFO: The status of Pod test-hostpath-type-86dkv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:28.829: INFO: The status of Pod test-hostpath-type-86dkv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:30.830: INFO: The status of Pod test-hostpath-type-86dkv is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:32.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-8234" for this suite. 
• [SLOW TEST:8.082 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket","total":-1,"completed":3,"skipped":83,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:25.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:41:25.298: INFO: The status of Pod test-hostpath-type-sv6hl is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:27.302: INFO: The status of Pod test-hostpath-type-sv6hl is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:29.303: INFO: The status of Pod test-hostpath-type-sv6hl is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:31.302: INFO: The status of Pod test-hostpath-type-sv6hl is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Nov 13 05:41:31.304: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-8772 PodName:test-hostpath-type-sv6hl ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:41:31.304: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:33.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-8772" for this suite. 
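"Checking for HostPathType error event" in the two blocks above amounts to polling the events attached to the test pod until kubelet reports the hostPath type-check failure. A hedged client-go sketch of that poll; the substring matched here is an assumption about kubelet's message (host_path_type.go may match differently), the namespace is taken from this run, and the pod name is a placeholder because the generated name of the failing pod is not shown in the log:

package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hasHostPathTypeError reports whether any event for the pod mentions a
// hostPath type-check failure (what kubelet emits when the path exists but is
// not of the requested HostPathType, or does not exist at all).
func hasHostPathTypeError(cs kubernetes.Interface, ns, pod string) (bool, error) {
	evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + pod,
	})
	if err != nil {
		return false, err
	}
	for _, ev := range evs.Items {
		if strings.Contains(ev.Message, "hostPath type check failed") { // assumed substring
			return true, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Placeholder pod name: the pod created in "STEP: Creating pod" is not named in the log.
	const ns, podName = "host-path-type-char-dev-8772", "test-hostpath-type-pod"
	for i := 0; i < 30; i++ { // poll for up to ~1 minute, mirroring the test's event wait
		if ok, err := hasHostPathTypeError(cs, ns, podName); err == nil && ok {
			fmt.Println("found HostPathType error event")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no HostPathType error event within timeout")
}

------------------------------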
• [SLOW TEST:8.154 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket","total":-1,"completed":4,"skipped":67,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1113 05:39:57.609566 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:57.609: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:57.611: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-558 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:39:57.716: INFO: creating *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-attacher Nov 13 05:39:57.718: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-558 Nov 13 05:39:57.718: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-558 Nov 13 05:39:57.721: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-558 Nov 13 05:39:57.724: INFO: creating *v1.Role: csi-mock-volumes-558-3864/external-attacher-cfg-csi-mock-volumes-558 Nov 13 05:39:57.727: INFO: creating *v1.RoleBinding: csi-mock-volumes-558-3864/csi-attacher-role-cfg Nov 13 05:39:57.729: INFO: creating *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-provisioner Nov 13 05:39:57.732: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-558 Nov 13 05:39:57.732: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-558 Nov 13 05:39:57.735: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-558 Nov 13 05:39:57.738: INFO: creating *v1.Role: csi-mock-volumes-558-3864/external-provisioner-cfg-csi-mock-volumes-558 Nov 13 05:39:57.741: INFO: creating *v1.RoleBinding: csi-mock-volumes-558-3864/csi-provisioner-role-cfg Nov 13 05:39:57.744: INFO: creating *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-resizer Nov 13 05:39:57.746: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-558 Nov 13 05:39:57.746: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-558 Nov 13 05:39:57.749: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-558 Nov 13 05:39:57.751: INFO: creating *v1.Role: csi-mock-volumes-558-3864/external-resizer-cfg-csi-mock-volumes-558 Nov 
13 05:39:57.754: INFO: creating *v1.RoleBinding: csi-mock-volumes-558-3864/csi-resizer-role-cfg Nov 13 05:39:57.756: INFO: creating *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-snapshotter Nov 13 05:39:57.759: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-558 Nov 13 05:39:57.759: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-558 Nov 13 05:39:57.761: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-558 Nov 13 05:39:57.764: INFO: creating *v1.Role: csi-mock-volumes-558-3864/external-snapshotter-leaderelection-csi-mock-volumes-558 Nov 13 05:39:57.767: INFO: creating *v1.RoleBinding: csi-mock-volumes-558-3864/external-snapshotter-leaderelection Nov 13 05:39:57.770: INFO: creating *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-mock Nov 13 05:39:57.772: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-558 Nov 13 05:39:57.774: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-558 Nov 13 05:39:57.777: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-558 Nov 13 05:39:57.779: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-558 Nov 13 05:39:57.781: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-558 Nov 13 05:39:57.784: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-558 Nov 13 05:39:57.786: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-558 Nov 13 05:39:57.788: INFO: creating *v1.StatefulSet: csi-mock-volumes-558-3864/csi-mockplugin Nov 13 05:39:57.793: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-558 Nov 13 05:39:57.797: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-558" Nov 13 05:39:57.799: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-558 to register on node node2 I1113 05:40:08.876409 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-558","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:40:08.970781 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:40:08.973072 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-558","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:40:08.974552 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:40:08.976809 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:40:09.353342 28 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-558"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:40:14.066: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:40:14.071: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jjr7m] to have phase Bound Nov 13 05:40:14.074: INFO: PersistentVolumeClaim pvc-jjr7m found but phase is Pending instead of Bound. I1113 05:40:14.078171 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76"}}},"Error":"","FullError":null} Nov 13 05:40:16.076: INFO: PersistentVolumeClaim pvc-jjr7m found and phase=Bound (2.004968761s) Nov 13 05:40:16.089: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jjr7m] to have phase Bound Nov 13 05:40:16.092: INFO: PersistentVolumeClaim pvc-jjr7m found and phase=Bound (2.451994ms) STEP: Waiting for expected CSI calls I1113 05:40:17.873167 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:17.875646 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76","storage.kubernetes.io/csiProvisionerIdentity":"1636782008972-8081-csi-mock-csi-mock-volumes-558"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:40:18.480066 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:18.481959 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76","storage.kubernetes.io/csiProvisionerIdentity":"1636782008972-8081-csi-mock-csi-mock-volumes-558"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:40:19.547724 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:19.549551 28 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76","storage.kubernetes.io/csiProvisionerIdentity":"1636782008972-8081-csi-mock-csi-mock-volumes-558"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1113 05:40:21.564172 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:40:21.565: INFO: >>> kubeConfig: /root/.kube/config I1113 05:40:21.696123 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76","storage.kubernetes.io/csiProvisionerIdentity":"1636782008972-8081-csi-mock-csi-mock-volumes-558"}},"Response":{},"Error":"","FullError":null} I1113 05:40:21.943451 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:40:21.945: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:40:22.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Waiting for pod to be running Nov 13 05:40:22.128: INFO: >>> kubeConfig: /root/.kube/config I1113 05:40:22.218212 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76/globalmount","target_path":"/var/lib/kubelet/pods/b1f36002-331e-40d6-8096-e5c169313af3/volumes/kubernetes.io~csi/pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76","storage.kubernetes.io/csiProvisionerIdentity":"1636782008972-8081-csi-mock-csi-mock-volumes-558"}},"Response":{},"Error":"","FullError":null} I1113 05:40:23.500906 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:23.504184 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/b1f36002-331e-40d6-8096-e5c169313af3/volumes/kubernetes.io~csi/pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} STEP: Deleting the previously created pod Nov 13 05:40:26.102: INFO: Deleting pod "pvc-volume-tester-8hbsw" in namespace "csi-mock-volumes-558" Nov 13 05:40:26.106: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8hbsw" to be fully deleted Nov 13 05:40:31.055: INFO: >>> kubeConfig: 
/root/.kube/config I1113 05:40:31.245047 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/b1f36002-331e-40d6-8096-e5c169313af3/volumes/kubernetes.io~csi/pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76/mount"},"Response":{},"Error":"","FullError":null} I1113 05:40:31.259211 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:40:31.395711 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-8hbsw Nov 13 05:40:43.111: INFO: Deleting pod "pvc-volume-tester-8hbsw" in namespace "csi-mock-volumes-558" STEP: Deleting claim pvc-jjr7m Nov 13 05:40:43.119: INFO: Waiting up to 2m0s for PersistentVolume pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76 to get deleted Nov 13 05:40:43.121: INFO: PersistentVolume pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76 found and phase=Bound (1.952284ms) I1113 05:40:43.134189 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 13 05:40:45.124: INFO: PersistentVolume pvc-904fcae4-ffb1-4af4-9ebf-e527bffe0c76 was removed STEP: Deleting storageclass csi-mock-volumes-558-scpgptc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-558 STEP: Waiting for namespaces [csi-mock-volumes-558] to vanish STEP: uninstalling csi mock driver Nov 13 05:40:51.176: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-attacher Nov 13 05:40:51.180: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-558 Nov 13 05:40:51.183: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-558 Nov 13 05:40:51.187: INFO: deleting *v1.Role: csi-mock-volumes-558-3864/external-attacher-cfg-csi-mock-volumes-558 Nov 13 05:40:51.190: INFO: deleting *v1.RoleBinding: csi-mock-volumes-558-3864/csi-attacher-role-cfg Nov 13 05:40:51.193: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-provisioner Nov 13 05:40:51.196: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-558 Nov 13 05:40:51.200: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-558 Nov 13 05:40:51.206: INFO: deleting *v1.Role: csi-mock-volumes-558-3864/external-provisioner-cfg-csi-mock-volumes-558 Nov 13 05:40:51.210: INFO: deleting *v1.RoleBinding: csi-mock-volumes-558-3864/csi-provisioner-role-cfg Nov 13 05:40:51.218: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-resizer Nov 13 05:40:51.226: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-558 Nov 13 05:40:51.234: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-558 Nov 13 05:40:51.238: INFO: deleting *v1.Role: csi-mock-volumes-558-3864/external-resizer-cfg-csi-mock-volumes-558 Nov 13 05:40:51.241: INFO: deleting *v1.RoleBinding: csi-mock-volumes-558-3864/csi-resizer-role-cfg Nov 13 05:40:51.245: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-snapshotter Nov 13 05:40:51.248: INFO: deleting *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-558 Nov 13 05:40:51.251: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-558 Nov 13 05:40:51.254: INFO: deleting *v1.Role: csi-mock-volumes-558-3864/external-snapshotter-leaderelection-csi-mock-volumes-558 Nov 13 05:40:51.258: INFO: deleting *v1.RoleBinding: csi-mock-volumes-558-3864/external-snapshotter-leaderelection Nov 13 05:40:51.262: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-558-3864/csi-mock Nov 13 05:40:51.265: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-558 Nov 13 05:40:51.268: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-558 Nov 13 05:40:51.271: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-558 Nov 13 05:40:51.274: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-558 Nov 13 05:40:51.278: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-558 Nov 13 05:40:51.281: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-558 Nov 13 05:40:51.284: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-558 Nov 13 05:40:51.287: INFO: deleting *v1.StatefulSet: csi-mock-volumes-558-3864/csi-mockplugin Nov 13 05:40:51.290: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-558 STEP: deleting the driver namespace: csi-mock-volumes-558-3864 STEP: Waiting for namespaces [csi-mock-volumes-558-3864] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:35.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:97.726 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error","total":-1,"completed":1,"skipped":5,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:05.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 13 05:41:09.500: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-e7a1212c-e4fc-4baa-87cd-7c3623ec6d76] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:09.500: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:09.888: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-17fed0e6-d959-4b93-970b-edde0fc5647f] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:09.888: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:10.117: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-dc853450-4fb6-4c5b-aa1d-95b6ac40617c] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:10.117: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:10.675: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fbc694c0-3aba-432c-9dd9-dfb3d3de1437] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:10.675: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:10.826: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-528247f0-ef0c-4967-b4bc-6acc2e13073a] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:10.826: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:10.993: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-22c1f63e-f683-49a4-8772-f41224c0382e] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:10.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:41:11.180: INFO: Creating a PV followed by a PVC Nov 13 05:41:11.186: INFO: Creating a PV followed by a PVC Nov 13 05:41:11.192: INFO: Creating a PV followed by a PVC Nov 13 05:41:11.198: INFO: Creating a PV followed by a PVC Nov 13 05:41:11.203: INFO: Creating a PV followed by a PVC Nov 13 05:41:11.209: INFO: Creating a PV followed by a PVC Nov 13 05:41:21.254: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 13 05:41:25.272: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a143284f-79b5-4ff6-8ed3-ad953b48d289] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:25.272: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:25.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-38b48bd8-fb1d-469b-b6d6-74a50d2907f9] 
Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:25.369: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:25.474: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-54737371-a02b-4a2f-8a37-a9c2052e3a9a] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:25.474: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:25.705: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-79d65236-db61-413a-af1f-8bb0088f2283] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:25.705: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:25.798: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-165863a1-f0ff-4278-84b7-c3a189550d40] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:25.798: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:26.149: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e65cf900-e82d-4f55-a777-d2c6a9b553c4] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:26.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:41:26.276: INFO: Creating a PV followed by a PVC Nov 13 05:41:26.283: INFO: Creating a PV followed by a PVC Nov 13 05:41:26.288: INFO: Creating a PV followed by a PVC Nov 13 05:41:26.293: INFO: Creating a PV followed by a PVC Nov 13 05:41:26.299: INFO: Creating a PV followed by a PVC Nov 13 05:41:26.304: INFO: Creating a PV followed by a PVC Nov 13 05:41:36.345: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod management is parallel and pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Nov 13 05:41:36.345: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 13 05:41:36.347: INFO: Deleting PersistentVolumeClaim "pvc-xcttv" Nov 13 05:41:36.351: INFO: Deleting PersistentVolume "local-pvhs47h" STEP: Cleaning up PVC and PV Nov 13 05:41:36.355: INFO: Deleting PersistentVolumeClaim "pvc-rq2nh" Nov 13 05:41:36.358: INFO: Deleting PersistentVolume "local-pvv958w" STEP: Cleaning up PVC and PV Nov 13 05:41:36.362: INFO: Deleting PersistentVolumeClaim "pvc-kq2hd" Nov 13 05:41:36.366: INFO: Deleting PersistentVolume "local-pvc897k" STEP: Cleaning up PVC and PV Nov 13 05:41:36.370: INFO: Deleting PersistentVolumeClaim "pvc-z64s9" Nov 13 05:41:36.374: INFO: Deleting PersistentVolume "local-pvd72s4" STEP: Cleaning up PVC and 
PV Nov 13 05:41:36.377: INFO: Deleting PersistentVolumeClaim "pvc-p54f6" Nov 13 05:41:36.381: INFO: Deleting PersistentVolume "local-pv5mdzr" STEP: Cleaning up PVC and PV Nov 13 05:41:36.385: INFO: Deleting PersistentVolumeClaim "pvc-nrqjh" Nov 13 05:41:36.389: INFO: Deleting PersistentVolume "local-pvvgjmn" STEP: Removing the test directory Nov 13 05:41:36.393: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e7a1212c-e4fc-4baa-87cd-7c3623ec6d76] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:36.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:36.484: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-17fed0e6-d959-4b93-970b-edde0fc5647f] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:36.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:36.586: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dc853450-4fb6-4c5b-aa1d-95b6ac40617c] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:36.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:36.709: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fbc694c0-3aba-432c-9dd9-dfb3d3de1437] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:36.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:36.807: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-528247f0-ef0c-4967-b4bc-6acc2e13073a] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:36.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:36.903: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-22c1f63e-f683-49a4-8772-f41224c0382e] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node1-gtk8v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:36.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 13 05:41:36.999: INFO: Deleting PersistentVolumeClaim "pvc-dd4pg" Nov 13 05:41:37.003: INFO: Deleting PersistentVolume "local-pvp5zpd" STEP: Cleaning up PVC and PV Nov 13 05:41:37.006: INFO: Deleting PersistentVolumeClaim "pvc-pglcq" Nov 13 05:41:37.010: INFO: Deleting PersistentVolume "local-pv5nfj7" STEP: Cleaning up PVC and PV Nov 13 05:41:37.013: INFO: Deleting PersistentVolumeClaim "pvc-dvr22" Nov 13 05:41:37.017: INFO: Deleting PersistentVolume "local-pvlldjb" STEP: Cleaning up PVC and PV Nov 
13 05:41:37.020: INFO: Deleting PersistentVolumeClaim "pvc-zwzt9" Nov 13 05:41:37.024: INFO: Deleting PersistentVolume "local-pvd7v26" STEP: Cleaning up PVC and PV Nov 13 05:41:37.028: INFO: Deleting PersistentVolumeClaim "pvc-z4xpt" Nov 13 05:41:37.031: INFO: Deleting PersistentVolume "local-pvtrlbs" STEP: Cleaning up PVC and PV Nov 13 05:41:37.035: INFO: Deleting PersistentVolumeClaim "pvc-dw9bw" Nov 13 05:41:37.038: INFO: Deleting PersistentVolume "local-pvs2s7x" STEP: Removing the test directory Nov 13 05:41:37.042: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a143284f-79b5-4ff6-8ed3-ad953b48d289] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:37.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:37.156: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-38b48bd8-fb1d-469b-b6d6-74a50d2907f9] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:37.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:37.249: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-54737371-a02b-4a2f-8a37-a9c2052e3a9a] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:37.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:37.388: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-79d65236-db61-413a-af1f-8bb0088f2283] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:37.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:37.560: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-165863a1-f0ff-4278-84b7-c3a189550d40] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:37.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:41:37.818: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e65cf900-e82d-4f55-a777-d2c6a9b553c4] Namespace:persistent-local-volumes-test-9523 PodName:hostexec-node2-vcp9j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:37.818: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:37.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9523" for this suite. 
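Each "Creating a PV followed by a PVC" step above pairs a local PersistentVolume, pinned to one node via node affinity and backed by one of the /tmp/local-volume-test-* directories just created over nsenter, with a PVC that deliberately stays Pending until a pod is scheduled (hence "PVCs were not bound within 10s (that's good)"). A hedged client-go sketch of one such pair; the storage class name, capacity, and object names are illustrative, not the values the suite generates, and field types follow the v1.21-era API this run uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPVAndPVC builds a node-pinned local PV and a PVC that can bind to it.
func localPVAndPVC(node, hostDir string) (*corev1.PersistentVolume, *corev1.PersistentVolumeClaim) {
	sc := "local-storage" // assumed name; the suite creates its own WaitForFirstConsumer class
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:         corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: sc,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: hostDir}, // e.g. /tmp/local-volume-test-...
			},
			// A local PV must be pinned to the node that owns the backing path.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/hostname",
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{node},
					}},
				}}},
			},
		},
	}
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}
	return pv, pvc
}

func main() {
	pv, pvc := localPVAndPVC("node1", "/tmp/local-volume-test-example")
	fmt.Println("local PV path:", pv.Spec.Local.Path, "PVC class set:", pvc.Spec.StorageClassName != nil)
}

------------------------------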
S [SKIPPING] [32.492 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod management is parallel and pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:32.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:41:32.929: INFO: The status of Pod test-hostpath-type-877tv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:34.933: INFO: The status of Pod test-hostpath-type-877tv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:41:36.933: INFO: The status of Pod test-hostpath-type-877tv is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Nov 13 05:41:36.935: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-4636 PodName:test-hostpath-type-877tv ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:41:36.935: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:41.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-4636" for this suite. 
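The character-device fixtures in these HostPathType specs are created inside the helper pod with "mknod /mnt/test/achardev c 89 1". The same operation expressed in Go, in case the shell one-liner is opaque: a character special file with major 89, minor 1. This is a standalone sketch using x/sys/unix, not how host_path_type.go does it (the test simply execs the shell command in the pod):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	const path = "/mnt/test/achardev"
	// S_IFCHR marks a character special file; Mkdev packs major=89, minor=1
	// into the device number, matching "mknod /mnt/test/achardev c 89 1".
	dev := int(unix.Mkdev(89, 1))
	if err := unix.Mknod(path, unix.S_IFCHR|0o644, dev); err != nil {
		fmt.Println("mknod failed (needs root / CAP_MKNOD):", err)
		return
	}
	fmt.Println("created character device", path)
}

------------------------------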
• [SLOW TEST:8.151 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset","total":-1,"completed":4,"skipped":91,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:39:57.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath W1113 05:39:57.692829 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:39:57.693: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:39:57.694: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 STEP: Create configmap STEP: Creating pod pod-subpath-test-configmap-tj5d STEP: Failing liveness probe Nov 13 05:40:05.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=subpath-2338 exec pod-subpath-test-configmap-tj5d --container test-container-volume-configmap-tj5d -- /bin/sh -c rm /probe-volume/probe-file' Nov 13 05:40:06.123: INFO: stderr: "" Nov 13 05:40:06.123: INFO: stdout: "" Nov 13 05:40:06.123: INFO: Pod exec output: STEP: Waiting for container to restart Nov 13 05:40:06.126: INFO: Container test-container-subpath-configmap-tj5d, restarts: 0 Nov 13 05:40:16.130: INFO: Container test-container-subpath-configmap-tj5d, restarts: 1 Nov 13 05:40:16.130: INFO: Container has restart count: 1 STEP: Fix liveness probe STEP: Waiting for container to stop restarting Nov 13 05:40:22.141: INFO: Container has restart count: 2 Nov 13 05:40:34.143: INFO: Container has restart count: 3 Nov 13 05:41:36.141: INFO: Container restart has stabilized Nov 13 05:41:36.141: INFO: Deleting pod "pod-subpath-test-configmap-tj5d" in namespace "subpath-2338" Nov 13 05:41:36.146: INFO: Wait up to 5m0s for pod "pod-subpath-test-configmap-tj5d" to be fully deleted [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:42.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2338" for this suite. 
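The Subpath "Container restart" spec above drives kubelet's restart loop by hand: the file backing the liveness probe is removed with kubectl exec, the probed container restarts until the ConfigMap content is fixed, and the test then waits for the restart count to stabilize. A hedged sketch of the probed container's shape, a ConfigMap volume mounted at /probe-volume plus an exec liveness probe on the file inside it (in the pod under test a sibling container mounts the same ConfigMap through subPath, which is the feature being exercised); the image is a placeholder, and the ProbeHandler field is named Handler in the v1.21-era client-go this run uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// probedContainer keeps itself alive only while the probe file mounted from
// the ConfigMap can be read; deleting the file triggers a kubelet restart.
func probedContainer() corev1.Container {
	return corev1.Container{
		Name:  "test-container-volume-configmap",
		Image: "registry.k8s.io/e2e-test-images/agnhost:2.32", // placeholder image
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "probe-volume", // the ConfigMap volume created in "STEP: Create configmap"
			MountPath: "/probe-volume",
		}},
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{ // "Handler" in client-go v0.21
				Exec: &corev1.ExecAction{Command: []string{"cat", "/probe-volume/probe-file"}},
			},
			InitialDelaySeconds: 1,
			FailureThreshold:    1,
			PeriodSeconds:       2,
		},
	}
}

func main() {
	c := probedContainer()
	fmt.Printf("liveness command %v; removing %s/probe-file makes kubelet restart the container\n",
		c.LivenessProbe.Exec.Command, c.VolumeMounts[0].MountPath)
}

------------------------------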
• [SLOW TEST:104.528 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Container restart /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130 should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":1,"skipped":26,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:42.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:41:42.187: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:42.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3002" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:31.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:41:35.606: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-6d3e1dc8-7caf-4bdb-9d65-c77ab9e93d54-backend && ln -s /tmp/local-volume-test-6d3e1dc8-7caf-4bdb-9d65-c77ab9e93d54-backend /tmp/local-volume-test-6d3e1dc8-7caf-4bdb-9d65-c77ab9e93d54] 
Namespace:persistent-local-volumes-test-282 PodName:hostexec-node2-8jkdt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:35.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:41:35.708: INFO: Creating a PV followed by a PVC Nov 13 05:41:35.715: INFO: Waiting for PV local-pv4cnkh to bind to PVC pvc-pftp9 Nov 13 05:41:35.715: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-pftp9] to have phase Bound Nov 13 05:41:35.717: INFO: PersistentVolumeClaim pvc-pftp9 found but phase is Pending instead of Bound. Nov 13 05:41:37.719: INFO: PersistentVolumeClaim pvc-pftp9 found but phase is Pending instead of Bound. Nov 13 05:41:39.724: INFO: PersistentVolumeClaim pvc-pftp9 found but phase is Pending instead of Bound. Nov 13 05:41:41.729: INFO: PersistentVolumeClaim pvc-pftp9 found and phase=Bound (6.01410362s) Nov 13 05:41:41.729: INFO: Waiting up to 3m0s for PersistentVolume local-pv4cnkh to have phase Bound Nov 13 05:41:41.733: INFO: PersistentVolume local-pv4cnkh found and phase=Bound (3.846831ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 13 05:41:45.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-282 exec pod-dd3a8712-2622-497e-b1e7-41db78b57b0f --namespace=persistent-local-volumes-test-282 -- stat -c %g /mnt/volume1' Nov 13 05:41:46.020: INFO: stderr: "" Nov 13 05:41:46.020: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 13 05:41:50.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-282 exec pod-9be53d64-3ba4-4b5b-8fdb-8ce848037c09 --namespace=persistent-local-volumes-test-282 -- stat -c %g /mnt/volume1' Nov 13 05:41:50.281: INFO: stderr: "" Nov 13 05:41:50.281: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-dd3a8712-2622-497e-b1e7-41db78b57b0f in namespace persistent-local-volumes-test-282 STEP: Deleting second pod STEP: Deleting pod pod-9be53d64-3ba4-4b5b-8fdb-8ce848037c09 in namespace persistent-local-volumes-test-282 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:41:50.291: INFO: Deleting PersistentVolumeClaim "pvc-pftp9" Nov 13 05:41:50.294: INFO: Deleting PersistentVolume "local-pv4cnkh" STEP: Removing the test directory Nov 13 05:41:50.297: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6d3e1dc8-7caf-4bdb-9d65-c77ab9e93d54 && rm -r /tmp/local-volume-test-6d3e1dc8-7caf-4bdb-9d65-c77ab9e93d54-backend] Namespace:persistent-local-volumes-test-282 PodName:hostexec-node2-8jkdt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:50.297: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:50.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-282" for this suite. • [SLOW TEST:18.868 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":3,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:35.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:41:37.412: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-93046038-9d88-45d0-8f79-5a3e0dcbe23c && mount --bind /tmp/local-volume-test-93046038-9d88-45d0-8f79-5a3e0dcbe23c /tmp/local-volume-test-93046038-9d88-45d0-8f79-5a3e0dcbe23c] Namespace:persistent-local-volumes-test-5223 PodName:hostexec-node1-wncbm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:37.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:41:37.508: INFO: Creating a PV followed by a PVC Nov 13 05:41:37.515: INFO: Waiting for PV local-pvt46ts to bind to PVC pvc-cpnzv Nov 13 05:41:37.515: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cpnzv] to have phase Bound Nov 13 05:41:37.517: INFO: PersistentVolumeClaim pvc-cpnzv found but phase is Pending instead of Bound. Nov 13 05:41:39.524: INFO: PersistentVolumeClaim pvc-cpnzv found but phase is Pending instead of Bound. Nov 13 05:41:41.527: INFO: PersistentVolumeClaim pvc-cpnzv found but phase is Pending instead of Bound. 
Nov 13 05:41:43.530: INFO: PersistentVolumeClaim pvc-cpnzv found and phase=Bound (6.014527025s) Nov 13 05:41:43.530: INFO: Waiting up to 3m0s for PersistentVolume local-pvt46ts to have phase Bound Nov 13 05:41:43.532: INFO: PersistentVolume local-pvt46ts found and phase=Bound (1.798298ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:41:51.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5223 exec pod-8f2350d2-fc7c-4fa3-b972-d40f8afa3dcd --namespace=persistent-local-volumes-test-5223 -- stat -c %g /mnt/volume1' Nov 13 05:41:51.800: INFO: stderr: "" Nov 13 05:41:51.800: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-8f2350d2-fc7c-4fa3-b972-d40f8afa3dcd in namespace persistent-local-volumes-test-5223 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:41:51.804: INFO: Deleting PersistentVolumeClaim "pvc-cpnzv" Nov 13 05:41:51.808: INFO: Deleting PersistentVolume "local-pvt46ts" STEP: Removing the test directory Nov 13 05:41:51.811: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-93046038-9d88-45d0-8f79-5a3e0dcbe23c && rm -r /tmp/local-volume-test-93046038-9d88-45d0-8f79-5a3e0dcbe23c] Namespace:persistent-local-volumes-test-5223 PodName:hostexec-node1-wncbm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:51.811: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:51.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5223" for this suite. 
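Both fsGroup specs above work the same way: the pod sets securityContext.fsGroup to 1234, kubelet chgrps the local volume's mount to that GID before the container starts, and the test confirms it with "stat -c %g /mnt/volume1" expecting "1234". A hedged sketch of the relevant pod fields; pod, image, and volume names are placeholders, and the Volumes section is omitted because in the suite it is the PVC bound to the local PV prepared in the BeforeEach:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// withFSGroup returns a pod spec fragment whose volumes kubelet will chgrp to
// the given GID before the containers run.
func withFSGroup(gid int64) corev1.PodSpec {
	return corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{
			FSGroup: &gid, // e.g. 1234; verified later with `stat -c %g /mnt/volume1`
		},
		Containers: []corev1.Container{{
			Name:  "write-pod",
			Image: "registry.k8s.io/e2e-test-images/busybox:1.29", // placeholder image
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "volume1",
				MountPath: "/mnt/volume1",
			}},
		}},
		// Volumes omitted: the suite wires in the PVC bound to its local PV here.
	}
}

func main() {
	spec := withFSGroup(1234)
	fmt.Println("fsGroup:", *spec.SecurityContext.FSGroup)
}

------------------------------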
• [SLOW TEST:16.576 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":2,"skipped":28,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:53.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-8628 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:40:53.688: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-attacher Nov 13 05:40:53.691: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8628 Nov 13 05:40:53.691: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8628 Nov 13 05:40:53.694: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8628 Nov 13 05:40:53.697: INFO: creating *v1.Role: csi-mock-volumes-8628-3364/external-attacher-cfg-csi-mock-volumes-8628 Nov 13 05:40:53.700: INFO: creating *v1.RoleBinding: csi-mock-volumes-8628-3364/csi-attacher-role-cfg Nov 13 05:40:53.702: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-provisioner Nov 13 05:40:53.705: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8628 Nov 13 05:40:53.705: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8628 Nov 13 05:40:53.708: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8628 Nov 13 05:40:53.710: INFO: creating *v1.Role: csi-mock-volumes-8628-3364/external-provisioner-cfg-csi-mock-volumes-8628 Nov 13 05:40:53.712: INFO: creating *v1.RoleBinding: csi-mock-volumes-8628-3364/csi-provisioner-role-cfg Nov 13 05:40:53.715: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-resizer Nov 13 05:40:53.718: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8628 Nov 13 05:40:53.718: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8628 Nov 13 05:40:53.721: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8628 Nov 13 05:40:53.724: INFO: creating *v1.Role: csi-mock-volumes-8628-3364/external-resizer-cfg-csi-mock-volumes-8628 Nov 13 05:40:53.726: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-8628-3364/csi-resizer-role-cfg Nov 13 05:40:53.729: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-snapshotter Nov 13 05:40:53.731: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8628 Nov 13 05:40:53.732: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8628 Nov 13 05:40:53.734: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8628 Nov 13 05:40:53.737: INFO: creating *v1.Role: csi-mock-volumes-8628-3364/external-snapshotter-leaderelection-csi-mock-volumes-8628 Nov 13 05:40:53.739: INFO: creating *v1.RoleBinding: csi-mock-volumes-8628-3364/external-snapshotter-leaderelection Nov 13 05:40:53.742: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-mock Nov 13 05:40:53.744: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8628 Nov 13 05:40:53.747: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8628 Nov 13 05:40:53.750: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8628 Nov 13 05:40:53.752: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8628 Nov 13 05:40:53.755: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8628 Nov 13 05:40:53.757: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8628 Nov 13 05:40:53.760: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8628 Nov 13 05:40:53.762: INFO: creating *v1.StatefulSet: csi-mock-volumes-8628-3364/csi-mockplugin Nov 13 05:40:53.767: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8628 Nov 13 05:40:53.773: INFO: creating *v1.StatefulSet: csi-mock-volumes-8628-3364/csi-mockplugin-resizer Nov 13 05:40:53.776: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8628" Nov 13 05:40:53.779: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8628 to register on node node1 STEP: Creating pod Nov 13 05:41:10.046: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:41:10.051: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-qbpsf] to have phase Bound Nov 13 05:41:10.054: INFO: PersistentVolumeClaim pvc-qbpsf found but phase is Pending instead of Bound. 
Nov 13 05:41:12.057: INFO: PersistentVolumeClaim pvc-qbpsf found and phase=Bound (2.005332084s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Nov 13 05:41:22.096: INFO: Deleting pod "pvc-volume-tester-9lnf5" in namespace "csi-mock-volumes-8628" Nov 13 05:41:22.101: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9lnf5" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-9lnf5 Nov 13 05:41:28.124: INFO: Deleting pod "pvc-volume-tester-9lnf5" in namespace "csi-mock-volumes-8628" STEP: Deleting pod pvc-volume-tester-qjrmz Nov 13 05:41:28.127: INFO: Deleting pod "pvc-volume-tester-qjrmz" in namespace "csi-mock-volumes-8628" Nov 13 05:41:28.130: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qjrmz" to be fully deleted STEP: Deleting claim pvc-qbpsf Nov 13 05:41:32.143: INFO: Waiting up to 2m0s for PersistentVolume pvc-7d6ffb30-a97b-4f9d-a873-69fba784f54a to get deleted Nov 13 05:41:32.145: INFO: PersistentVolume pvc-7d6ffb30-a97b-4f9d-a873-69fba784f54a found and phase=Bound (1.932752ms) Nov 13 05:41:34.150: INFO: PersistentVolume pvc-7d6ffb30-a97b-4f9d-a873-69fba784f54a was removed STEP: Deleting storageclass csi-mock-volumes-8628-sc277q8 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8628 STEP: Waiting for namespaces [csi-mock-volumes-8628] to vanish STEP: uninstalling csi mock driver Nov 13 05:41:40.166: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-attacher Nov 13 05:41:40.170: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8628 Nov 13 05:41:40.174: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8628 Nov 13 05:41:40.177: INFO: deleting *v1.Role: csi-mock-volumes-8628-3364/external-attacher-cfg-csi-mock-volumes-8628 Nov 13 05:41:40.181: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8628-3364/csi-attacher-role-cfg Nov 13 05:41:40.184: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-provisioner Nov 13 05:41:40.187: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8628 Nov 13 05:41:40.191: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8628 Nov 13 05:41:40.198: INFO: deleting *v1.Role: csi-mock-volumes-8628-3364/external-provisioner-cfg-csi-mock-volumes-8628 Nov 13 05:41:40.209: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8628-3364/csi-provisioner-role-cfg Nov 13 05:41:40.218: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-resizer Nov 13 05:41:40.221: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8628 Nov 13 05:41:40.225: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8628 Nov 13 05:41:40.228: INFO: deleting *v1.Role: csi-mock-volumes-8628-3364/external-resizer-cfg-csi-mock-volumes-8628 Nov 13 05:41:40.231: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8628-3364/csi-resizer-role-cfg Nov 13 05:41:40.236: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-snapshotter Nov 13 05:41:40.239: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8628 Nov 13 05:41:40.243: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8628 Nov 13 05:41:40.246: INFO: deleting *v1.Role: csi-mock-volumes-8628-3364/external-snapshotter-leaderelection-csi-mock-volumes-8628 Nov 13 05:41:40.249: 
INFO: deleting *v1.RoleBinding: csi-mock-volumes-8628-3364/external-snapshotter-leaderelection Nov 13 05:41:40.252: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8628-3364/csi-mock Nov 13 05:41:40.255: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8628 Nov 13 05:41:40.259: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8628 Nov 13 05:41:40.262: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8628 Nov 13 05:41:40.265: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8628 Nov 13 05:41:40.268: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8628 Nov 13 05:41:40.272: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8628 Nov 13 05:41:40.275: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8628 Nov 13 05:41:40.279: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8628-3364/csi-mockplugin Nov 13 05:41:40.283: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8628 Nov 13 05:41:40.286: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8628-3364/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-8628-3364 STEP: Waiting for namespaces [csi-mock-volumes-8628-3364] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:52.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:58.688 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:50.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 13 05:41:56.596: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1048 PodName:hostexec-node1-7dr98 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:56.596: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:41:56.981: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 13 05:41:56.981: INFO: exec node1: stdout: "0\n" Nov 13 05:41:56.982: INFO: exec node1: stderr: "ls: 
cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 13 05:41:56.982: INFO: exec node1: exit code: 0 Nov 13 05:41:56.982: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:41:56.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1048" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [6.441 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:23.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:41:31.141: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d6410ad2-7312-43dd-988d-cb4197432b7a] Namespace:persistent-local-volumes-test-9432 PodName:hostexec-node1-zf28r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:31.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:41:31.316: INFO: Creating a PV followed by a PVC Nov 13 05:41:31.323: INFO: Waiting for PV local-pvv7twz to bind to PVC pvc-c8tx2 Nov 13 05:41:31.323: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-c8tx2] to have phase Bound Nov 13 05:41:31.326: INFO: PersistentVolumeClaim pvc-c8tx2 found but phase is Pending instead of Bound. Nov 13 05:41:33.328: INFO: PersistentVolumeClaim pvc-c8tx2 found but phase is Pending instead of Bound. Nov 13 05:41:35.332: INFO: PersistentVolumeClaim pvc-c8tx2 found but phase is Pending instead of Bound. 
Nov 13 05:41:37.336: INFO: PersistentVolumeClaim pvc-c8tx2 found but phase is Pending instead of Bound. Nov 13 05:41:39.341: INFO: PersistentVolumeClaim pvc-c8tx2 found but phase is Pending instead of Bound. Nov 13 05:41:41.344: INFO: PersistentVolumeClaim pvc-c8tx2 found but phase is Pending instead of Bound. Nov 13 05:41:43.347: INFO: PersistentVolumeClaim pvc-c8tx2 found and phase=Bound (12.023888418s) Nov 13 05:41:43.347: INFO: Waiting up to 3m0s for PersistentVolume local-pvv7twz to have phase Bound Nov 13 05:41:43.350: INFO: PersistentVolume local-pvv7twz found and phase=Bound (2.275338ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 13 05:41:51.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9432 exec pod-1a07d6b1-3e98-4917-828a-fc6cb1bddf51 --namespace=persistent-local-volumes-test-9432 -- stat -c %g /mnt/volume1' Nov 13 05:41:51.604: INFO: stderr: "" Nov 13 05:41:51.604: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 13 05:41:59.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9432 exec pod-15925b9d-620d-4c08-b41f-d26f89bdf51f --namespace=persistent-local-volumes-test-9432 -- stat -c %g /mnt/volume1' Nov 13 05:41:59.882: INFO: stderr: "" Nov 13 05:41:59.882: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-1a07d6b1-3e98-4917-828a-fc6cb1bddf51 in namespace persistent-local-volumes-test-9432 STEP: Deleting second pod STEP: Deleting pod pod-15925b9d-620d-4c08-b41f-d26f89bdf51f in namespace persistent-local-volumes-test-9432 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:41:59.894: INFO: Deleting PersistentVolumeClaim "pvc-c8tx2" Nov 13 05:41:59.898: INFO: Deleting PersistentVolume "local-pvv7twz" STEP: Removing the test directory Nov 13 05:41:59.902: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d6410ad2-7312-43dd-988d-cb4197432b7a] Namespace:persistent-local-volumes-test-9432 PodName:hostexec-node1-zf28r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:59.902: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:00.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9432" for this suite. 
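The "Creating local PVCs and PVs" step above pre-provisions a PersistentVolume of type local that points at the test directory on node1, plus a claim that binds to it; local PVs carry a required node affinity so consuming pods only schedule onto that node. A minimal sketch using the Go client types of this run's era (capacity, storage class name and path are assumptions, not values read from the log):

// Minimal sketch of a pre-provisioned local PV pinned to node1 and a PVC
// that binds to it. Capacity, storage class and path are assumed values.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func localPVAndPVC() (*corev1.PersistentVolume, *corev1.PersistentVolumeClaim) {
	scName := "local-storage"
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-demo"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: scName,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-test-demo"},
			},
			// Local volumes must declare which node hosts the backing path.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node1"},
						}},
					}},
				},
			},
		},
	}
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pvc-demo"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &scName,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("2Gi"),
				},
			},
		},
	}
	return pv, pvc
}

func main() { _, _ = localPVAndPVC() }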
• [SLOW TEST:36.929 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":3,"skipped":62,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:00.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 13 05:42:00.152: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:00.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-4092" for this suite. 
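The Regional PD spec is skipped because this suite runs with the local provider rather than gce or gke. In the e2e framework such gates are expressed as a provider check in BeforeEach; the sketch below follows the upstream skipper helper, but treat the import paths and exact call as an assumption rather than the test's verbatim source:

// Hedged sketch of a provider-gated e2e spec: provider-specific storage tests
// (like Regional PD) skip unless the configured cloud provider matches,
// producing the "Only supported for providers [gce gke] (not local)" skip
// seen in the log when running against a local cluster.
package storage

import (
	"github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.Describe("[sig-storage] Regional PD (sketch)", func() {
	f := framework.NewDefaultFramework("regional-pd-sketch")

	ginkgo.BeforeEach(func() {
		// Skips the whole spec in Spec Setup (BeforeEach), as in the log above.
		e2eskipper.SkipUnlessProviderIs("gce", "gke")
	})

	ginkgo.It("should provision storage [Slow]", func() {
		_ = f // provisioning logic elided in this sketch
	})
})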
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:77 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:57.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 13 05:42:03.085: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5955 PodName:hostexec-node1-tvfm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:03.085: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:03.560: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 13 05:42:03.561: INFO: exec node1: stdout: "0\n" Nov 13 05:42:03.561: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 13 05:42:03.561: INFO: exec node1: exit code: 0 Nov 13 05:42:03.561: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:03.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5955" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [6.529 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:13.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should be plumbed down when csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 STEP: Building a driver namespace object, basename csi-mock-volumes-7924 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:41:13.136: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-attacher Nov 13 05:41:13.138: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7924 Nov 13 05:41:13.138: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7924 Nov 13 05:41:13.141: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7924 Nov 13 05:41:13.144: INFO: creating *v1.Role: csi-mock-volumes-7924-1652/external-attacher-cfg-csi-mock-volumes-7924 Nov 13 05:41:13.146: INFO: creating *v1.RoleBinding: csi-mock-volumes-7924-1652/csi-attacher-role-cfg Nov 13 05:41:13.149: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-provisioner Nov 13 05:41:13.152: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7924 Nov 13 05:41:13.152: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7924 Nov 13 05:41:13.154: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7924 Nov 13 05:41:13.157: INFO: creating *v1.Role: csi-mock-volumes-7924-1652/external-provisioner-cfg-csi-mock-volumes-7924 Nov 13 05:41:13.160: INFO: creating *v1.RoleBinding: csi-mock-volumes-7924-1652/csi-provisioner-role-cfg Nov 13 05:41:13.162: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-resizer Nov 13 05:41:13.166: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7924 Nov 13 05:41:13.166: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7924 Nov 13 05:41:13.168: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7924 Nov 13 05:41:13.170: INFO: creating *v1.Role: csi-mock-volumes-7924-1652/external-resizer-cfg-csi-mock-volumes-7924 Nov 13 05:41:13.172: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-7924-1652/csi-resizer-role-cfg Nov 13 05:41:13.175: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-snapshotter Nov 13 05:41:13.177: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7924 Nov 13 05:41:13.177: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7924 Nov 13 05:41:13.180: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7924 Nov 13 05:41:13.182: INFO: creating *v1.Role: csi-mock-volumes-7924-1652/external-snapshotter-leaderelection-csi-mock-volumes-7924 Nov 13 05:41:13.184: INFO: creating *v1.RoleBinding: csi-mock-volumes-7924-1652/external-snapshotter-leaderelection Nov 13 05:41:13.187: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-mock Nov 13 05:41:13.190: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7924 Nov 13 05:41:13.192: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7924 Nov 13 05:41:13.195: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7924 Nov 13 05:41:13.197: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7924 Nov 13 05:41:13.200: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7924 Nov 13 05:41:13.202: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7924 Nov 13 05:41:13.204: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7924 Nov 13 05:41:13.206: INFO: creating *v1.StatefulSet: csi-mock-volumes-7924-1652/csi-mockplugin Nov 13 05:41:13.210: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7924 Nov 13 05:41:13.212: INFO: creating *v1.StatefulSet: csi-mock-volumes-7924-1652/csi-mockplugin-attacher Nov 13 05:41:13.216: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7924" Nov 13 05:41:13.218: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7924 to register on node node1 STEP: Creating pod Nov 13 05:41:29.489: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:41:29.494: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9pkth] to have phase Bound Nov 13 05:41:29.496: INFO: PersistentVolumeClaim pvc-9pkth found but phase is Pending instead of Bound. 
Nov 13 05:41:31.499: INFO: PersistentVolumeClaim pvc-9pkth found and phase=Bound (2.005250257s) STEP: Deleting the previously created pod Nov 13 05:41:38.521: INFO: Deleting pod "pvc-volume-tester-hxgjg" in namespace "csi-mock-volumes-7924" Nov 13 05:41:38.526: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hxgjg" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:41:44.545: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IktyTWkzSmRiTk51TF94Sm9XenUydlB2clE4ZDB1UU02V1V1TV9Dc0VvV2MifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjM2NzgyNjk0LCJpYXQiOjE2MzY3ODIwOTQsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTc5MjQiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLWh4Z2pnIiwidWlkIjoiMmE2MDRhODAtYWE1MC00NjI3LWI4ZTYtNGIzMTc5MjBhZmUyIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiZDE0ZTgwZmYtODNiOS00MmVmLWJmNTgtOWJjOWY5NzcxMjE2In19LCJuYmYiOjE2MzY3ODIwOTQsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTc5MjQ6ZGVmYXVsdCJ9.ZdMFtlBoQWiDlUoSJKsRVtdPiUOdQTQ4z_yrjpoUnLfRDSOSwzmvWAP3v6GPp2DWD5wgVNh4wWn8iobQ_yvhGQWOWXZLJffXHgRX412F3WV0gUe18D9VEeg8sQ7qssB6xRqCcyKE0Vdb8fCyi5CjXO-CJkqGf6JG4QT4TDd0W6AUW7t0UujUt8odtfEsbYW8AQ8vhLZcO5im-7IVQbvBA41IIaMd3pGsgiDesHMskLpC6bKc31AwxrQGzDSHw_-sfE6CeLW3O5PuF2NGspbKEbUtxlajtSiik77Gz6rhwuS4dyALflQpAbkXpflNY0PGPsTG0f81NyBoM_MSSBBnXQ","expirationTimestamp":"2021-11-13T05:51:34Z"}} Nov 13 05:41:44.545: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2a604a80-aa50-4627-b8e6-4b317920afe2/volumes/kubernetes.io~csi/pvc-80271782-3928-4cde-ab84-80b0844a66af/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-hxgjg Nov 13 05:41:44.545: INFO: Deleting pod "pvc-volume-tester-hxgjg" in namespace "csi-mock-volumes-7924" STEP: Deleting claim pvc-9pkth Nov 13 05:41:44.555: INFO: Waiting up to 2m0s for PersistentVolume pvc-80271782-3928-4cde-ab84-80b0844a66af to get deleted Nov 13 05:41:44.557: INFO: PersistentVolume pvc-80271782-3928-4cde-ab84-80b0844a66af found and phase=Bound (2.432653ms) Nov 13 05:41:46.561: INFO: PersistentVolume pvc-80271782-3928-4cde-ab84-80b0844a66af was removed STEP: Deleting storageclass csi-mock-volumes-7924-scn4r6v STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7924 STEP: Waiting for namespaces [csi-mock-volumes-7924] to vanish STEP: uninstalling csi mock driver Nov 13 05:41:52.575: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-attacher Nov 13 05:41:52.578: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7924 Nov 13 05:41:52.583: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7924 Nov 13 05:41:52.586: INFO: deleting *v1.Role: csi-mock-volumes-7924-1652/external-attacher-cfg-csi-mock-volumes-7924 Nov 13 05:41:52.589: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7924-1652/csi-attacher-role-cfg Nov 13 05:41:52.594: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-provisioner Nov 13 05:41:52.598: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7924 Nov 13 05:41:52.601: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7924 Nov 13 
05:41:52.606: INFO: deleting *v1.Role: csi-mock-volumes-7924-1652/external-provisioner-cfg-csi-mock-volumes-7924 Nov 13 05:41:52.609: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7924-1652/csi-provisioner-role-cfg Nov 13 05:41:52.613: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-resizer Nov 13 05:41:52.624: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7924 Nov 13 05:41:52.631: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7924 Nov 13 05:41:52.640: INFO: deleting *v1.Role: csi-mock-volumes-7924-1652/external-resizer-cfg-csi-mock-volumes-7924 Nov 13 05:41:52.643: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7924-1652/csi-resizer-role-cfg Nov 13 05:41:52.647: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-snapshotter Nov 13 05:41:52.651: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7924 Nov 13 05:41:52.655: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7924 Nov 13 05:41:52.658: INFO: deleting *v1.Role: csi-mock-volumes-7924-1652/external-snapshotter-leaderelection-csi-mock-volumes-7924 Nov 13 05:41:52.662: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7924-1652/external-snapshotter-leaderelection Nov 13 05:41:52.665: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7924-1652/csi-mock Nov 13 05:41:52.669: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7924 Nov 13 05:41:52.673: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7924 Nov 13 05:41:52.676: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7924 Nov 13 05:41:52.679: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7924 Nov 13 05:41:52.682: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7924 Nov 13 05:41:52.686: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7924 Nov 13 05:41:52.689: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7924 Nov 13 05:41:52.692: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7924-1652/csi-mockplugin Nov 13 05:41:52.696: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7924 Nov 13 05:41:52.700: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7924-1652/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7924-1652 STEP: Waiting for namespaces [csi-mock-volumes-7924-1652] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:08.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:55.651 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374 token should be plumbed down when csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":4,"skipped":170,"failed":0} SSSSSSSSS 
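The csiServiceAccountTokenEnabled=true case corresponds to the TokenRequests field on the CSIDriver object: when it is set, kubelet requests a token for the pod's service account and hands it to the driver in the volume context under csi.storage.k8s.io/serviceAccount.tokens, which is exactly the attribute the test finds in the mock driver's log above (the expirationTimestamp there is about ten minutes after issue). A hedged sketch of such a CSIDriver; the driver name, audience and the 600-second expiry are assumptions:

// Sketch of a CSIDriver object that opts into service account token plumbing.
// With TokenRequests set, kubelet passes the pod's token to the driver via
// the "csi.storage.k8s.io/serviceAccount.tokens" volume context key, which is
// the attribute the mock-driver test inspects above. Names are placeholders.
package main

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func tokenPlumbingDriver() *storagev1.CSIDriver {
	attachRequired := false
	expiration := int64(600) // short-lived token; assumed value
	requiresRepublish := true
	return &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-demo"},
		Spec: storagev1.CSIDriverSpec{
			AttachRequired: &attachRequired,
			TokenRequests: []storagev1.TokenRequest{{
				Audience:          "", // empty audience = the API server's default audience
				ExpirationSeconds: &expiration,
			}},
			// Ask kubelet to re-run NodePublish periodically so the token is refreshed.
			RequiresRepublish: &requiresRepublish,
		},
	}
}

func main() { _ = tokenPlumbingDriver() }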
------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:00.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 STEP: Creating configMap with name projected-configmap-test-volume-a1941fd6-431d-49c6-b272-ab2d997e7d32 STEP: Creating a pod to test consume configMaps Nov 13 05:42:00.240: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d" in namespace "projected-2558" to be "Succeeded or Failed" Nov 13 05:42:00.243: INFO: Pod "pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.78762ms Nov 13 05:42:02.247: INFO: Pod "pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006680004s Nov 13 05:42:04.253: INFO: Pod "pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012824846s Nov 13 05:42:06.256: INFO: Pod "pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016461956s Nov 13 05:42:08.261: INFO: Pod "pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021548431s Nov 13 05:42:10.267: INFO: Pod "pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.027060046s STEP: Saw pod success Nov 13 05:42:10.267: INFO: Pod "pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d" satisfied condition "Succeeded or Failed" Nov 13 05:42:10.270: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d container agnhost-container: STEP: delete the pod Nov 13 05:42:10.283: INFO: Waiting for pod pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d to disappear Nov 13 05:42:10.285: INFO: Pod pod-projected-configmaps-1f43315d-c0b5-4275-a4f3-29c12f38110d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:10.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2558" for this suite. 
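This spec consumes a ConfigMap through a projected volume as a non-root user with both defaultMode and fsGroup applied, then verifies the file contents and permissions from inside the pod. A rough sketch of that kind of pod (mode bits, user and group IDs, image and paths are illustrative assumptions; the real test builds its pod with its own helpers):

// Sketch of a pod that consumes a ConfigMap through a projected volume as a
// non-root user, with defaultMode and fsGroup set. Mode bits, IDs and names
// are illustrative assumptions, not values read from the log.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func projectedConfigMapPod() *corev1.Pod {
	defaultMode := int32(0440)
	runAsUser := int64(1000)
	fsGroup := int64(1234)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &runAsUser, // non-root
				FSGroup:   &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode, // applied to projected files
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = projectedConfigMapPod() }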
• [SLOW TEST:10.091 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:42.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:41:50.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-2d7219ef-4408-4b0e-b4ed-968c00a64132-backend && mount --bind /tmp/local-volume-test-2d7219ef-4408-4b0e-b4ed-968c00a64132-backend /tmp/local-volume-test-2d7219ef-4408-4b0e-b4ed-968c00a64132-backend && ln -s /tmp/local-volume-test-2d7219ef-4408-4b0e-b4ed-968c00a64132-backend /tmp/local-volume-test-2d7219ef-4408-4b0e-b4ed-968c00a64132] Namespace:persistent-local-volumes-test-4095 PodName:hostexec-node1-l5tzl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:50.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:41:50.372: INFO: Creating a PV followed by a PVC Nov 13 05:41:50.379: INFO: Waiting for PV local-pvc4nxv to bind to PVC pvc-fvmkq Nov 13 05:41:50.379: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fvmkq] to have phase Bound Nov 13 05:41:50.382: INFO: PersistentVolumeClaim pvc-fvmkq found but phase is Pending instead of Bound. Nov 13 05:41:52.385: INFO: PersistentVolumeClaim pvc-fvmkq found but phase is Pending instead of Bound. Nov 13 05:41:54.391: INFO: PersistentVolumeClaim pvc-fvmkq found but phase is Pending instead of Bound. Nov 13 05:41:56.394: INFO: PersistentVolumeClaim pvc-fvmkq found but phase is Pending instead of Bound. 
Nov 13 05:41:58.401: INFO: PersistentVolumeClaim pvc-fvmkq found and phase=Bound (8.021371355s) Nov 13 05:41:58.401: INFO: Waiting up to 3m0s for PersistentVolume local-pvc4nxv to have phase Bound Nov 13 05:41:58.403: INFO: PersistentVolume local-pvc4nxv found and phase=Bound (2.335608ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:42:10.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4095 exec pod-c79f31de-1cc8-448b-a10f-812a6de5d3b2 --namespace=persistent-local-volumes-test-4095 -- stat -c %g /mnt/volume1' Nov 13 05:42:10.930: INFO: stderr: "" Nov 13 05:42:10.930: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-c79f31de-1cc8-448b-a10f-812a6de5d3b2 in namespace persistent-local-volumes-test-4095 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:42:10.935: INFO: Deleting PersistentVolumeClaim "pvc-fvmkq" Nov 13 05:42:10.939: INFO: Deleting PersistentVolume "local-pvc4nxv" STEP: Removing the test directory Nov 13 05:42:10.943: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-2d7219ef-4408-4b0e-b4ed-968c00a64132 && umount /tmp/local-volume-test-2d7219ef-4408-4b0e-b4ed-968c00a64132-backend && rm -r /tmp/local-volume-test-2d7219ef-4408-4b0e-b4ed-968c00a64132-backend] Namespace:persistent-local-volumes-test-4095 PodName:hostexec-node1-l5tzl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:10.943: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:11.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4095" for this suite. 
• [SLOW TEST:28.943 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":2,"skipped":35,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:37.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed" Nov 13 05:41:42.045: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed" "/tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed"] Namespace:persistent-local-volumes-test-5032 PodName:hostexec-node1-f6zc6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:41:42.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:41:42.145: INFO: Creating a PV followed by a PVC Nov 13 05:41:42.152: INFO: Waiting for PV local-pvvdzdb to bind to PVC pvc-wbtfq Nov 13 05:41:42.152: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-wbtfq] to have phase Bound Nov 13 05:41:42.154: INFO: PersistentVolumeClaim pvc-wbtfq found but phase is Pending instead of Bound. Nov 13 05:41:44.158: INFO: PersistentVolumeClaim pvc-wbtfq found but phase is Pending instead of Bound. Nov 13 05:41:46.160: INFO: PersistentVolumeClaim pvc-wbtfq found but phase is Pending instead of Bound. Nov 13 05:41:48.163: INFO: PersistentVolumeClaim pvc-wbtfq found but phase is Pending instead of Bound. Nov 13 05:41:50.167: INFO: PersistentVolumeClaim pvc-wbtfq found but phase is Pending instead of Bound. Nov 13 05:41:52.171: INFO: PersistentVolumeClaim pvc-wbtfq found but phase is Pending instead of Bound. Nov 13 05:41:54.177: INFO: PersistentVolumeClaim pvc-wbtfq found but phase is Pending instead of Bound. 
Nov 13 05:41:56.180: INFO: PersistentVolumeClaim pvc-wbtfq found but phase is Pending instead of Bound. Nov 13 05:41:58.185: INFO: PersistentVolumeClaim pvc-wbtfq found and phase=Bound (16.033146232s) Nov 13 05:41:58.185: INFO: Waiting up to 3m0s for PersistentVolume local-pvvdzdb to have phase Bound Nov 13 05:41:58.188: INFO: PersistentVolume local-pvvdzdb found and phase=Bound (2.835558ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:42:08.216: INFO: pod "pod-3c779270-dba0-47e1-b74f-fee7f0e758d5" created on Node "node1" STEP: Writing in pod1 Nov 13 05:42:08.216: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5032 PodName:pod-3c779270-dba0-47e1-b74f-fee7f0e758d5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:08.216: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:09.867: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:42:09.867: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5032 PodName:pod-3c779270-dba0-47e1-b74f-fee7f0e758d5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:09.867: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:10.605: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:42:16.642: INFO: pod "pod-5b5b39f0-f13d-450c-a721-76c7e047de4e" created on Node "node1" Nov 13 05:42:16.642: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5032 PodName:pod-5b5b39f0-f13d-450c-a721-76c7e047de4e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:16.642: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:16.977: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 13 05:42:16.977: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5032 PodName:pod-5b5b39f0-f13d-450c-a721-76c7e047de4e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:16.977: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:17.060: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 13 05:42:17.060: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5032 PodName:pod-3c779270-dba0-47e1-b74f-fee7f0e758d5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:17.061: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:17.152: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: 
"/tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-3c779270-dba0-47e1-b74f-fee7f0e758d5 in namespace persistent-local-volumes-test-5032 STEP: Deleting pod2 STEP: Deleting pod pod-5b5b39f0-f13d-450c-a721-76c7e047de4e in namespace persistent-local-volumes-test-5032 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:42:17.162: INFO: Deleting PersistentVolumeClaim "pvc-wbtfq" Nov 13 05:42:17.165: INFO: Deleting PersistentVolume "local-pvvdzdb" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed" Nov 13 05:42:17.170: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed"] Namespace:persistent-local-volumes-test-5032 PodName:hostexec-node1-f6zc6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:17.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:42:17.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-74384d6b-0a79-40ef-a286-fcef58eb8bed] Namespace:persistent-local-volumes-test-5032 PodName:hostexec-node1-f6zc6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:17.265: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:17.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5032" for this suite. 
• [SLOW TEST:39.398 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:03.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:42:03.612: INFO: The status of Pod test-hostpath-type-4pbxv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:42:05.615: INFO: The status of Pod test-hostpath-type-4pbxv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:42:07.616: INFO: The status of Pod test-hostpath-type-4pbxv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:42:09.617: INFO: The status of Pod test-hostpath-type-4pbxv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:42:11.617: INFO: The status of Pod test-hostpath-type-4pbxv is Running (Ready = true) STEP: running on node node1 [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:19.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-565" for this suite. 
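The HostPathType check relies on the typed hostPath volume source: with type Socket, kubelet refuses the mount unless a UNIX socket already exists at the host path, which is what "mount socket 'asocket' successfully when HostPathType is HostPathSocket" exercises above. A minimal sketch of the volume fragment (the socket path is a placeholder):

// Sketch of a hostPath volume that only mounts if the host path is an
// existing UNIX socket (HostPathSocket). The socket path is a placeholder.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func socketHostPathVolume() corev1.Volume {
	socketType := corev1.HostPathSocket
	return corev1.Volume{
		Name: "host-socket",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/var/run/demo/asocket", // must already exist as a socket on the node
				Type: &socketType,
			},
		},
	}
}

func main() { _ = socketHostPathVolume() }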
• [SLOW TEST:16.074 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket","total":-1,"completed":4,"skipped":178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:17.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:42:21.532: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c68d17b7-dfdc-474e-b703-ad3f012e5891] Namespace:persistent-local-volumes-test-9725 PodName:hostexec-node1-qdhnx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:21.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:42:21.899: INFO: Creating a PV followed by a PVC Nov 13 05:42:21.905: INFO: Waiting for PV local-pv86jmw to bind to PVC pvc-xvswl Nov 13 05:42:21.905: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xvswl] to have phase Bound Nov 13 05:42:21.908: INFO: PersistentVolumeClaim pvc-xvswl found but phase is Pending instead of Bound. Nov 13 05:42:23.915: INFO: PersistentVolumeClaim pvc-xvswl found but phase is Pending instead of Bound. Nov 13 05:42:25.917: INFO: PersistentVolumeClaim pvc-xvswl found but phase is Pending instead of Bound. 
Nov 13 05:42:27.921: INFO: PersistentVolumeClaim pvc-xvswl found and phase=Bound (6.016097932s) Nov 13 05:42:27.921: INFO: Waiting up to 3m0s for PersistentVolume local-pv86jmw to have phase Bound Nov 13 05:42:27.923: INFO: PersistentVolume local-pv86jmw found and phase=Bound (1.580235ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:42:27.927: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:42:27.929: INFO: Deleting PersistentVolumeClaim "pvc-xvswl" Nov 13 05:42:27.932: INFO: Deleting PersistentVolume "local-pv86jmw" STEP: Removing the test directory Nov 13 05:42:27.938: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c68d17b7-dfdc-474e-b703-ad3f012e5891] Namespace:persistent-local-volumes-test-9725 PodName:hostexec-node1-qdhnx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:27.938: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:28.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9725" for this suite. 
S [SKIPPING] [10.776 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:11.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-3f2fb5c0-d1fd-4dd1-9903-b49327b075a5" Nov 13 05:42:17.306: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3f2fb5c0-d1fd-4dd1-9903-b49327b075a5 && dd if=/dev/zero of=/tmp/local-volume-test-3f2fb5c0-d1fd-4dd1-9903-b49327b075a5/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-3f2fb5c0-d1fd-4dd1-9903-b49327b075a5/file] Namespace:persistent-local-volumes-test-3366 PodName:hostexec-node1-6bfkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:17.306: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:17.428: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3f2fb5c0-d1fd-4dd1-9903-b49327b075a5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3366 PodName:hostexec-node1-6bfkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:17.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:42:17.763: INFO: Creating a PV followed by a PVC Nov 13 05:42:17.770: INFO: Waiting for PV local-pvjkk4r to bind to PVC pvc-cxbbf Nov 13 05:42:17.770: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cxbbf] to have phase Bound Nov 13 05:42:17.772: INFO: PersistentVolumeClaim pvc-cxbbf found but phase is Pending instead of Bound. Nov 13 05:42:19.774: INFO: PersistentVolumeClaim pvc-cxbbf found but phase is Pending instead of Bound. Nov 13 05:42:21.778: INFO: PersistentVolumeClaim pvc-cxbbf found but phase is Pending instead of Bound. 
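The [Volume type: block] fixture whose setup appears above backs the PV with a loop device: a file is filled with dd, attached with losetup, and the resulting /dev/loopN is what the PV points at. The same steps in plain shell, run as root on the node, with a placeholder path:

DIR=/tmp/local-volume-block-demo                    # placeholder path
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120   # ~20 MiB backing file
losetup -f "$DIR/file"                              # attach to the first free loop device

# recover the device name the same way the test does
E2E_LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
echo "$E2E_LOOP_DEV"                                # e.g. /dev/loop0

# teardown, mirroring the AfterEach block
losetup -d "$E2E_LOOP_DEV"
rm -r "$DIR"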
Nov 13 05:42:23.782: INFO: PersistentVolumeClaim pvc-cxbbf found but phase is Pending instead of Bound. Nov 13 05:42:25.786: INFO: PersistentVolumeClaim pvc-cxbbf found but phase is Pending instead of Bound. Nov 13 05:42:27.790: INFO: PersistentVolumeClaim pvc-cxbbf found and phase=Bound (10.020740307s) Nov 13 05:42:27.790: INFO: Waiting up to 3m0s for PersistentVolume local-pvjkk4r to have phase Bound Nov 13 05:42:27.793: INFO: PersistentVolume local-pvjkk4r found and phase=Bound (2.199626ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:42:31.823: INFO: pod "pod-00dcd1bf-c5f1-4265-8266-c72efeca97c4" created on Node "node1" STEP: Writing in pod1 Nov 13 05:42:31.823: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3366 PodName:pod-00dcd1bf-c5f1-4265-8266-c72efeca97c4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:31.823: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:31.911: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000165 seconds, 106.5KB/s", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:42:31.911: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3366 PodName:pod-00dcd1bf-c5f1-4265-8266-c72efeca97c4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:31.911: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:31.995: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-00dcd1bf-c5f1-4265-8266-c72efeca97c4 in namespace persistent-local-volumes-test-3366 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:42:32.001: INFO: Deleting PersistentVolumeClaim "pvc-cxbbf" Nov 13 05:42:32.005: INFO: Deleting PersistentVolume "local-pvjkk4r" Nov 13 05:42:32.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3f2fb5c0-d1fd-4dd1-9903-b49327b075a5/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3366 
PodName:hostexec-node1-6bfkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:32.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-3f2fb5c0-d1fd-4dd1-9903-b49327b075a5/file Nov 13 05:42:32.111: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-3366 PodName:hostexec-node1-6bfkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:32.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-3f2fb5c0-d1fd-4dd1-9903-b49327b075a5 Nov 13 05:42:32.201: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3f2fb5c0-d1fd-4dd1-9903-b49327b075a5] Namespace:persistent-local-volumes-test-3366 PodName:hostexec-node1-6bfkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:32.201: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:32.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3366" for this suite. • [SLOW TEST:21.049 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":85,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:10.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:42:16.510: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440-backend && mount --bind /tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440-backend 
/tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440-backend && ln -s /tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440-backend /tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440] Namespace:persistent-local-volumes-test-4668 PodName:hostexec-node1-mlvx5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:16.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:42:16.626: INFO: Creating a PV followed by a PVC Nov 13 05:42:16.635: INFO: Waiting for PV local-pvztlzd to bind to PVC pvc-4st9r Nov 13 05:42:16.635: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4st9r] to have phase Bound Nov 13 05:42:16.642: INFO: PersistentVolumeClaim pvc-4st9r found but phase is Pending instead of Bound. Nov 13 05:42:18.646: INFO: PersistentVolumeClaim pvc-4st9r found but phase is Pending instead of Bound. Nov 13 05:42:20.651: INFO: PersistentVolumeClaim pvc-4st9r found but phase is Pending instead of Bound. Nov 13 05:42:22.653: INFO: PersistentVolumeClaim pvc-4st9r found but phase is Pending instead of Bound. Nov 13 05:42:24.657: INFO: PersistentVolumeClaim pvc-4st9r found but phase is Pending instead of Bound. Nov 13 05:42:26.659: INFO: PersistentVolumeClaim pvc-4st9r found and phase=Bound (10.023905296s) Nov 13 05:42:26.659: INFO: Waiting up to 3m0s for PersistentVolume local-pvztlzd to have phase Bound Nov 13 05:42:26.661: INFO: PersistentVolume local-pvztlzd found and phase=Bound (1.913782ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:42:32.686: INFO: pod "pod-0b819d7c-8ac8-472e-a55f-5a8ab7d5caf6" created on Node "node1" STEP: Writing in pod1 Nov 13 05:42:32.686: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4668 PodName:pod-0b819d7c-8ac8-472e-a55f-5a8ab7d5caf6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:32.686: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:32.773: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:42:32.773: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4668 PodName:pod-0b819d7c-8ac8-472e-a55f-5a8ab7d5caf6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:32.773: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:42:32.872: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:42:32.872: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4668 PodName:pod-0b819d7c-8ac8-472e-a55f-5a8ab7d5caf6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:42:32.872: INFO: >>> kubeConfig: /root/.kube/config 
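For the dir-link-bindmounted variant above, the backing directory is bind-mounted onto itself and then exposed through a symlink, and it is the symlink path that the PV references. In shell, run as root on the node, with a placeholder base path:

BASE=/tmp/local-volume-dlb-demo                      # placeholder path
mkdir "${BASE}-backend"
mount --bind "${BASE}-backend" "${BASE}-backend"     # self bind-mount, as the test does
ln -s "${BASE}-backend" "${BASE}"                    # the PV's local.path points at this symlink

# cleanup mirrors the AfterEach step
rm "${BASE}"
umount "${BASE}-backend"
rm -r "${BASE}-backend"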
Nov 13 05:42:32.950: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-0b819d7c-8ac8-472e-a55f-5a8ab7d5caf6 in namespace persistent-local-volumes-test-4668 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:42:32.954: INFO: Deleting PersistentVolumeClaim "pvc-4st9r" Nov 13 05:42:32.958: INFO: Deleting PersistentVolume "local-pvztlzd" STEP: Removing the test directory Nov 13 05:42:32.962: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440 && umount /tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440-backend && rm -r /tmp/local-volume-test-5a728f2e-c55d-46c9-98c5-202f77f48440-backend] Namespace:persistent-local-volumes-test-4668 PodName:hostexec-node1-mlvx5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:42:32.962: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:33.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4668" for this suite. • [SLOW TEST:22.617 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":212,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:28.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 13 05:42:28.301: INFO: The status of Pod test-hostpath-type-mngnr is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:42:30.305: INFO: The status of Pod 
test-hostpath-type-mngnr is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:42:32.305: INFO: The status of Pod test-hostpath-type-mngnr is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:42:34.304: INFO: The status of Pod test-hostpath-type-mngnr is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:42:42.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-8074" for this suite. • [SLOW TEST:14.101 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket","total":-1,"completed":3,"skipped":147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":7,"skipped":226,"failed":0} [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:52.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 STEP: creating a Gluster DP server Pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass STEP: creating a claim object with a suffix for gluster dynamic provisioner Nov 13 05:42:54.363: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-6735 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-6735-glusterdptestng7cb,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Nov 13 05:42:54.371: INFO: Waiting up to timeout=5m0s for 
PersistentVolumeClaims [pvc-jtkkk] to have phase Bound Nov 13 05:42:54.374: INFO: PersistentVolumeClaim pvc-jtkkk found but phase is Pending instead of Bound. Nov 13 05:42:56.378: INFO: PersistentVolumeClaim pvc-jtkkk found and phase=Bound (2.006251597s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-6735"/"pvc-jtkkk" STEP: deleting the claim's PV "pvc-63eb0906-5a64-4ccb-89a5-bcbe079b3f6f" Nov 13 05:42:56.389: INFO: Waiting up to 20m0s for PersistentVolume pvc-63eb0906-5a64-4ccb-89a5-bcbe079b3f6f to get deleted Nov 13 05:42:56.391: INFO: PersistentVolume pvc-63eb0906-5a64-4ccb-89a5-bcbe079b3f6f found and phase=Bound (2.297696ms) Nov 13 05:43:01.394: INFO: PersistentVolume pvc-63eb0906-5a64-4ccb-89a5-bcbe079b3f6f was removed Nov 13 05:43:01.394: INFO: deleting claim "volume-provisioning-6735"/"pvc-jtkkk" Nov 13 05:43:01.399: INFO: deleting storage class volume-provisioning-6735-glusterdptestng7cb [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:01.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-6735" for this suite. • [SLOW TEST:69.103 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 GlusterDynamicProvisioner /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793 should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":8,"skipped":226,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:01.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 13 05:43:05.465: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3051 PodName:hostexec-node1-ggwz5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:43:05.465: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:43:05.582: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 13 05:43:05.582: INFO: exec node1: stdout: "0\n" Nov 13 05:43:05.582: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 13 05:43:05.582: INFO: exec node1: exit code: 0 Nov 13 
05:43:05.582: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:05.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3051" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.177 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:19.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-8237 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:42:19.772: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8237-9188/csi-attacher Nov 13 05:42:19.774: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8237 Nov 13 05:42:19.774: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8237 Nov 13 05:42:19.776: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8237 Nov 13 05:42:19.779: INFO: creating *v1.Role: csi-mock-volumes-8237-9188/external-attacher-cfg-csi-mock-volumes-8237 Nov 13 05:42:19.782: INFO: creating *v1.RoleBinding: csi-mock-volumes-8237-9188/csi-attacher-role-cfg Nov 13 05:42:19.785: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8237-9188/csi-provisioner Nov 13 05:42:19.787: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8237 Nov 13 05:42:19.788: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8237 Nov 13 05:42:19.790: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8237 Nov 13 05:42:19.793: INFO: creating *v1.Role: csi-mock-volumes-8237-9188/external-provisioner-cfg-csi-mock-volumes-8237 Nov 13 05:42:19.795: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-8237-9188/csi-provisioner-role-cfg Nov 13 05:42:19.798: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8237-9188/csi-resizer Nov 13 05:42:19.800: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8237 Nov 13 05:42:19.800: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8237 Nov 13 05:42:19.804: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8237 Nov 13 05:42:19.808: INFO: creating *v1.Role: csi-mock-volumes-8237-9188/external-resizer-cfg-csi-mock-volumes-8237 Nov 13 05:42:19.810: INFO: creating *v1.RoleBinding: csi-mock-volumes-8237-9188/csi-resizer-role-cfg Nov 13 05:42:19.815: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8237-9188/csi-snapshotter Nov 13 05:42:19.820: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8237 Nov 13 05:42:19.820: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8237 Nov 13 05:42:19.827: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8237 Nov 13 05:42:19.834: INFO: creating *v1.Role: csi-mock-volumes-8237-9188/external-snapshotter-leaderelection-csi-mock-volumes-8237 Nov 13 05:42:19.838: INFO: creating *v1.RoleBinding: csi-mock-volumes-8237-9188/external-snapshotter-leaderelection Nov 13 05:42:19.843: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8237-9188/csi-mock Nov 13 05:42:19.846: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8237 Nov 13 05:42:19.849: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8237 Nov 13 05:42:19.852: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8237 Nov 13 05:42:19.855: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8237 Nov 13 05:42:19.858: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8237 Nov 13 05:42:19.861: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8237 Nov 13 05:42:19.864: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8237 Nov 13 05:42:19.866: INFO: creating *v1.StatefulSet: csi-mock-volumes-8237-9188/csi-mockplugin Nov 13 05:42:19.870: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8237 Nov 13 05:42:19.872: INFO: creating *v1.StatefulSet: csi-mock-volumes-8237-9188/csi-mockplugin-attacher Nov 13 05:42:19.876: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8237" Nov 13 05:42:19.879: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8237 to register on node node1 STEP: Creating pod Nov 13 05:42:34.405: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 13 05:42:34.423: INFO: Deleting pod "pvc-volume-tester-nbphk" in namespace "csi-mock-volumes-8237" Nov 13 05:42:34.429: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nbphk" to be fully deleted STEP: Deleting pod pvc-volume-tester-nbphk Nov 13 05:42:34.431: INFO: Deleting pod "pvc-volume-tester-nbphk" in namespace "csi-mock-volumes-8237" STEP: Deleting claim pvc-6h4jt STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-8237 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8237 STEP: Waiting for namespaces [csi-mock-volumes-8237] to vanish STEP: uninstalling csi mock driver Nov 13 05:42:40.455: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-8237-9188/csi-attacher Nov 13 05:42:40.460: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8237 Nov 13 05:42:40.464: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8237 Nov 13 05:42:40.468: INFO: deleting *v1.Role: csi-mock-volumes-8237-9188/external-attacher-cfg-csi-mock-volumes-8237 Nov 13 05:42:40.471: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8237-9188/csi-attacher-role-cfg Nov 13 05:42:40.475: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8237-9188/csi-provisioner Nov 13 05:42:40.478: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8237 Nov 13 05:42:40.481: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8237 Nov 13 05:42:40.485: INFO: deleting *v1.Role: csi-mock-volumes-8237-9188/external-provisioner-cfg-csi-mock-volumes-8237 Nov 13 05:42:40.488: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8237-9188/csi-provisioner-role-cfg Nov 13 05:42:40.491: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8237-9188/csi-resizer Nov 13 05:42:40.493: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8237 Nov 13 05:42:40.496: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8237 Nov 13 05:42:40.500: INFO: deleting *v1.Role: csi-mock-volumes-8237-9188/external-resizer-cfg-csi-mock-volumes-8237 Nov 13 05:42:40.503: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8237-9188/csi-resizer-role-cfg Nov 13 05:42:40.506: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8237-9188/csi-snapshotter Nov 13 05:42:40.511: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8237 Nov 13 05:42:40.524: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8237 Nov 13 05:42:40.534: INFO: deleting *v1.Role: csi-mock-volumes-8237-9188/external-snapshotter-leaderelection-csi-mock-volumes-8237 Nov 13 05:42:40.538: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8237-9188/external-snapshotter-leaderelection Nov 13 05:42:40.542: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8237-9188/csi-mock Nov 13 05:42:40.545: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8237 Nov 13 05:42:40.548: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8237 Nov 13 05:42:40.551: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8237 Nov 13 05:42:40.554: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8237 Nov 13 05:42:40.557: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8237 Nov 13 05:42:40.560: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8237 Nov 13 05:42:40.563: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8237 Nov 13 05:42:40.567: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8237-9188/csi-mockplugin Nov 13 05:42:40.570: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8237 Nov 13 05:42:40.574: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8237-9188/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8237-9188 STEP: Waiting for namespaces [csi-mock-volumes-8237-9188] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:08.592: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready • [SLOW TEST:48.886 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":5,"skipped":206,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:05.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:43:05.662: INFO: The status of Pod test-hostpath-type-pxcvd is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:07.664: INFO: The status of Pod test-hostpath-type-pxcvd is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:09.667: INFO: The status of Pod test-hostpath-type-pxcvd is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Nov 13 05:43:09.670: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-3506 PodName:test-hostpath-type-pxcvd ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:43:09.671: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:13.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-3506" for this suite. 
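The HostPathType Block Device spec above first fabricates a device node inside the fixture pod's hostPath mount with mknod, then mounts it from a second pod whose volume declares type BlockDevice. The device-creation step as the log shows it (major/minor 89:1 is arbitrary test data), plus the volume fragment a consuming pod would roughly use:

# run inside a privileged pod that has the host directory mounted at /mnt/test
mknod /mnt/test/ablkdev b 89 1        # 'b' = block device node

# illustrative hostPath fragment for the consuming pod:
#   hostPath:
#     path: <host dir>/ablkdev
#     type: BlockDevice               # kubelet verifies the path really is a block device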
• [SLOW TEST:8.324 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev","total":-1,"completed":9,"skipped":239,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:13.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:43:13.998: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:13.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-4995" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:14.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Nov 13 05:43:14.035: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Nov 13 05:43:14.041: INFO: Waiting up to 30s for PersistentVolume hostpath-mp98w to have phase Available Nov 13 05:43:14.043: INFO: PersistentVolume hostpath-mp98w found but phase is Pending instead of Available. 
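The PV Protection spec that begins above creates a standalone hostPath PV, waits for it to reach phase Available, and relies on the kubernetes.io/pv-protection finalizer: because the volume is not bound to any claim, deletion should complete immediately. A manual spot-check, using the PV name from the log, might look like:

kubectl get pv hostpath-mp98w -o jsonpath='{.status.phase}{"\n"}{.metadata.finalizers}{"\n"}'
# expected while it exists: Available, with ["kubernetes.io/pv-protection"] set

kubectl delete pv hostpath-mp98w     # removal is immediate since no PVC references it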
Nov 13 05:43:15.047: INFO: PersistentVolume hostpath-mp98w found and phase=Available (1.006080775s) STEP: Checking that PV Protection finalizer is set [It] Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 STEP: Deleting the PV Nov 13 05:43:15.054: INFO: Waiting up to 3m0s for PersistentVolume hostpath-mp98w to get deleted Nov 13 05:43:15.056: INFO: PersistentVolume hostpath-mp98w found and phase=Available (1.89964ms) Nov 13 05:43:17.061: INFO: PersistentVolume hostpath-mp98w was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:17.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-3636" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Nov 13 05:43:17.071: INFO: AfterEach: Cleaning up test resources. Nov 13 05:43:17.071: INFO: pvc is nil Nov 13 05:43:17.071: INFO: Deleting PersistentVolume "hostpath-mp98w" • ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":10,"skipped":254,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:17.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:17.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5760" for this suite. 
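The EmptyDir spec above ("pod should support memory backed volumes of specified size") exercises an emptyDir with medium Memory and an explicit sizeLimit; with the SizeMemoryBackedVolumes feature enabled, kubelet mounts the tmpfs with that size. A minimal illustrative pod (names and size are not the test's own):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mem-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "df -h /cache && sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
      sizeLimit: 128Mi             # tmpfs sized to the limit when the feature gate is on
EOF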
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":11,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:08.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:43:10.654: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-f69c2bef-99b6-44dd-af20-9e3297678629-backend && ln -s /tmp/local-volume-test-f69c2bef-99b6-44dd-af20-9e3297678629-backend /tmp/local-volume-test-f69c2bef-99b6-44dd-af20-9e3297678629] Namespace:persistent-local-volumes-test-5421 PodName:hostexec-node1-t8chp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:43:10.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:43:10.816: INFO: Creating a PV followed by a PVC Nov 13 05:43:10.823: INFO: Waiting for PV local-pvsblds to bind to PVC pvc-gwtvk Nov 13 05:43:10.823: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-gwtvk] to have phase Bound Nov 13 05:43:10.825: INFO: PersistentVolumeClaim pvc-gwtvk found but phase is Pending instead of Bound. 
Nov 13 05:43:12.829: INFO: PersistentVolumeClaim pvc-gwtvk found and phase=Bound (2.005772093s) Nov 13 05:43:12.829: INFO: Waiting up to 3m0s for PersistentVolume local-pvsblds to have phase Bound Nov 13 05:43:12.832: INFO: PersistentVolume local-pvsblds found and phase=Bound (3.297796ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:43:16.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5421 exec pod-7d56f69e-0b2e-4ff8-8a7b-6a6cc3d2284a --namespace=persistent-local-volumes-test-5421 -- stat -c %g /mnt/volume1' Nov 13 05:43:17.123: INFO: stderr: "" Nov 13 05:43:17.123: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-7d56f69e-0b2e-4ff8-8a7b-6a6cc3d2284a in namespace persistent-local-volumes-test-5421 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:43:17.127: INFO: Deleting PersistentVolumeClaim "pvc-gwtvk" Nov 13 05:43:17.132: INFO: Deleting PersistentVolume "local-pvsblds" STEP: Removing the test directory Nov 13 05:43:17.136: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f69c2bef-99b6-44dd-af20-9e3297678629 && rm -r /tmp/local-volume-test-f69c2bef-99b6-44dd-af20-9e3297678629-backend] Namespace:persistent-local-volumes-test-5421 PodName:hostexec-node1-t8chp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:43:17.136: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:17.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5421" for this suite. 
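The fsGroup check above creates the pod with securityContext.fsGroup set and then verifies group ownership inside the mount with stat; the log shows the expected group id 1234 coming back. Condensed, the verification amounts to:

# pod spec fragment (illustrative):
#   securityContext:
#     fsGroup: 1234
# with the local volume mounted at /mnt/volume1

kubectl exec -n persistent-local-volumes-test-5421 pod-7d56f69e-0b2e-4ff8-8a7b-6a6cc3d2284a \
  -- stat -c %g /mnt/volume1       # prints 1234 when fsGroup was applied to the mount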
• [SLOW TEST:8.641 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":6,"skipped":209,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:17.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 STEP: Creating a pod to test downward API volume plugin Nov 13 05:43:17.266: INFO: Waiting up to 5m0s for pod "metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4" in namespace "projected-3504" to be "Succeeded or Failed" Nov 13 05:43:17.268: INFO: Pod "metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621735ms Nov 13 05:43:19.273: INFO: Pod "metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007206003s Nov 13 05:43:21.276: INFO: Pod "metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00992253s Nov 13 05:43:23.280: INFO: Pod "metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014527582s Nov 13 05:43:25.285: INFO: Pod "metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018752742s STEP: Saw pod success Nov 13 05:43:25.285: INFO: Pod "metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4" satisfied condition "Succeeded or Failed" Nov 13 05:43:25.288: INFO: Trying to get logs from node node1 pod metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4 container client-container: STEP: delete the pod Nov 13 05:43:25.306: INFO: Waiting for pod metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4 to disappear Nov 13 05:43:25.308: INFO: Pod metadata-volume-31f700c6-a525-445b-b3cc-814f247b5ce4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:25.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3504" for this suite. 
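The Projected downwardAPI spec above injects the pod's own name through a projected volume and reads it back from a container running as a non-root user with an fsGroup set. A hedged sketch of such a pod; the image, ids and file name are illustrative, not the test's:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-fsgroup-demo       # hypothetical name
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # the pod's own name, written into the file
EOF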
• [SLOW TEST:8.082 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":315,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:25.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Nov 13 05:43:25.368: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:25.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5054" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Nov 13 05:43:25.378: INFO: AfterEach: Cleaning up test resources Nov 13 05:43:25.378: INFO: pvc is nil Nov 13 05:43:25.378: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:156 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:33.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Nov 13 05:42:03.520: INFO: Deleting pod "pv-5906"/"pod-ephm-test-projected-drfd" Nov 13 05:42:03.520: INFO: Deleting pod "pod-ephm-test-projected-drfd" in namespace "pv-5906" Nov 13 05:42:03.525: INFO: 
Wait up to 5m0s for pod "pod-ephm-test-projected-drfd" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:33.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5906" for this suite. • [SLOW TEST:120.055 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":5,"skipped":97,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:17.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 13 05:43:17.303: INFO: The status of Pod test-hostpath-type-v56nw is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:19.307: INFO: The status of Pod test-hostpath-type-v56nw is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:21.307: INFO: The status of Pod test-hostpath-type-v56nw is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:23.308: INFO: The status of Pod test-hostpath-type-v56nw is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:25.307: INFO: The status of Pod test-hostpath-type-v56nw is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:35.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-5141" for this suite. 
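The failing-mount case above is the mirror image of the earlier block-device test: the host path exists but is an ordinary file, while the volume declares type BlockDevice, so kubelet refuses the mount and the spec only waits for the corresponding error event. Roughly, the mismatched fragment and the event check (event wording varies by kubelet version):

# illustrative volume fragment:
#   hostPath:
#     path: /mnt/test/afile        # a regular file created by the fixture pod
#     type: BlockDevice            # mismatch: kubelet rejects the mount

kubectl get events -n host-path-type-file-5141 --field-selector type=Warning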
• [SLOW TEST:18.103 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev","total":-1,"completed":7,"skipped":219,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:52.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648 STEP: Clean PV local-pv6c8qh [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:37.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5655" for this suite. 
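
The PersistentVolumes-local spec above reduces to one claim shared by 50 pods that must all reach Running. A rough sketch of that shape (claim name, storage class, size and image are assumptions, not values from the test), again with the v1.21-era types:

// Sketch only: one PVC plus a generator for pods that all reference the same
// ClaimName, which is the sharing pattern the spec exercises.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const claimName = "shared-local-pvc" // assumed name

func sharedClaim() *corev1.PersistentVolumeClaim {
	sc := "local-storage" // assumed storage class bound to a local PV
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: claimName},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
}

func podUsingClaim(i int) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("local-pv-user-%d", i)},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "app",
				Image:        "busybox:1.29", // illustrative image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: claimName,
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(sharedClaim().Name)
	for i := 0; i < 50; i++ { // the spec above creates 50 such pods and waits for Running
		fmt.Println(podUsingClaim(i).Name)
	}
}

The real spec binds the claim to a pre-provisioned local PV on one node; the sketch only shows how every pod points at the same ClaimName.
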
• [SLOW TEST:105.534 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629 all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":-1,"completed":3,"skipped":57,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:33.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:43:33.650: INFO: The status of Pod test-hostpath-type-25lx8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:35.654: INFO: The status of Pod test-hostpath-type-25lx8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:37.653: INFO: The status of Pod test-hostpath-type-25lx8 is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:39.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-788" for this suite. 
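
Both HostPathType specs above finish with a "Checking for HostPathType error event" step. A small client-go sketch of that kind of check; the kubeconfig path and namespace come from the log above, while the event reason and message substring matched here are assumptions about how the kubelet reports the failure:

// Sketch only: list events in the test namespace and look for a mount failure
// that mentions the hostPath type check.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	events, err := client.CoreV1().Events("host-path-type-socket-788").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for _, ev := range events.Items {
		if ev.Reason == "FailedMount" && strings.Contains(ev.Message, "hostPath type check failed") {
			fmt.Printf("found expected event: %s: %s\n", ev.Reason, ev.Message)
		}
	}
}
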
• [SLOW TEST:6.077 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory","total":-1,"completed":6,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:39.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 13 05:43:39.818: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:39.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-1392" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:86 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:35.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 STEP: Creating a pod to test hostPath mode Nov 13 05:43:35.437: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-456" to be "Succeeded or Failed" Nov 13 05:43:35.440: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688246ms Nov 13 05:43:37.444: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006929463s Nov 13 05:43:39.449: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.01231856s
Nov 13 05:43:41.452: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015497771s
STEP: Saw pod success
Nov 13 05:43:41.452: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Nov 13 05:43:41.455: INFO: Trying to get logs from node node1 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Nov 13 05:43:41.472: INFO: Waiting for pod pod-host-path-test to disappear
Nov 13 05:43:41.473: INFO: Pod pod-host-path-test no longer exists
Nov 13 05:43:41.474: FAIL: Unexpected error:
    <*errors.errorString | 0xc0007a27f0>: {
        s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": 61267\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx",
    }
    expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected
     : mount type of "/test-volume": 61267
     mode of file "/test-volume": dgtrwxrwxrwx
    to contain substring
     : mode of file "/test-volume": dtrwxrwx
    occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0010669a0, 0x6ee4065, 0xd, 0xc001881800, 0x0, 0xc004e751c0, 0x1, 0x1, 0x70ebaa0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:564
k8s.io/kubernetes/test/e2e/common/storage.glob..func5.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:59 +0x299
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001828f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001828f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001828f00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "hostpath-456".
STEP: Found 9 events.
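
The FAIL above is a plain substring mismatch on a Go os.FileMode string: the test expects the container's report for /test-volume to contain "dtrwxrwx", but the volume is reported as "dgtrwxrwxrwx", where the extra 'g' is the setgid bit. A minimal reproduction of how such mode strings are formed (the bits below are chosen for illustration, not read from the failing node):

// Sketch only: os.FileMode.String prints one flag character per set mode bit
// (d directory, g setgid, t sticky) ahead of the permission triplets, so a
// single extra bit changes the prefix the test matches on.
package main

import (
	"fmt"
	"os"
)

func main() {
	expected := os.ModeDir | os.ModeSticky | 0777                 // "dtrwxrwxrwx"
	observed := os.ModeDir | os.ModeSetgid | os.ModeSticky | 0777 // "dgtrwxrwxrwx"

	fmt.Println("expected prefix:", expected.String())
	fmt.Println("observed mode  :", observed.String())
}
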
Nov 13 05:43:41.478: INFO: At 2021-11-13 05:43:35 +0000 UTC - event for pod-host-path-test: {default-scheduler } Scheduled: Successfully assigned hostpath-456/pod-host-path-test to node1 Nov 13 05:43:41.478: INFO: At 2021-11-13 05:43:37 +0000 UTC - event for pod-host-path-test: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 13 05:43:41.478: INFO: At 2021-11-13 05:43:37 +0000 UTC - event for pod-host-path-test: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 295.775068ms Nov 13 05:43:41.478: INFO: At 2021-11-13 05:43:37 +0000 UTC - event for pod-host-path-test: {kubelet node1} Created: Created container test-container-1 Nov 13 05:43:41.478: INFO: At 2021-11-13 05:43:37 +0000 UTC - event for pod-host-path-test: {kubelet node1} Started: Started container test-container-1 Nov 13 05:43:41.478: INFO: At 2021-11-13 05:43:37 +0000 UTC - event for pod-host-path-test: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Nov 13 05:43:41.478: INFO: At 2021-11-13 05:43:38 +0000 UTC - event for pod-host-path-test: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 311.697574ms Nov 13 05:43:41.478: INFO: At 2021-11-13 05:43:38 +0000 UTC - event for pod-host-path-test: {kubelet node1} Created: Created container test-container-2 Nov 13 05:43:41.478: INFO: At 2021-11-13 05:43:38 +0000 UTC - event for pod-host-path-test: {kubelet node1} Started: Started container test-container-2 Nov 13 05:43:41.480: INFO: POD NODE PHASE GRACE CONDITIONS Nov 13 05:43:41.480: INFO: Nov 13 05:43:41.484: INFO: Logging node info for node master1 Nov 13 05:43:41.486: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 201260 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:34 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:34 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:34 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:43:34 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:43:41.487: INFO: Logging kubelet events for node master1 Nov 13 05:43:41.489: INFO: Logging pods the kubelet thinks is on node master1 Nov 13 05:43:41.521: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.521: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:43:41.521: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.521: INFO: Container coredns ready: true, restart count 2 Nov 13 05:43:41.521: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:43:41.521: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:43:41.521: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:43:41.521: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.521: INFO: Container kube-scheduler ready: true, restart count 0 Nov 13 05:43:41.521: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.521: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 05:43:41.521: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:43:41.521: INFO: Init container install-cni ready: true, restart count 0 Nov 13 05:43:41.521: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:43:41.521: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.521: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:43:41.521: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded) Nov 13 05:43:41.521: INFO: Container docker-registry ready: true, restart count 0 Nov 13 05:43:41.521: INFO: Container nginx ready: true, restart count 0 Nov 13 05:43:41.521: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.521: INFO: Container kube-apiserver ready: true, restart count 0 W1113 05:43:41.536270 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
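
The Capacity and Allocatable lists in the node dumps above are resource.Quantity values ("79550m" of CPU, "439913340Ki" of ephemeral storage, and so on). A short sketch of reading them with the same quantity library; the literals are copied from the master1 dump, the rest is generic:

// Sketch only: parse a few of the quantities printed above and convert them
// to plain integers.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	cpu := resource.MustParse("79550m")      // allocatable CPU from the master1 dump
	eph := resource.MustParse("439913340Ki") // ephemeral-storage capacity
	mem := resource.MustParse("196518324Ki") // memory capacity

	fmt.Println("cpu millicores :", cpu.MilliValue()) // 79550
	fmt.Println("ephemeral bytes:", eph.Value())      // 450471260160
	fmt.Println("memory bytes   :", mem.Value())      // 201234763776
}

439913340Ki works out to the 450471260160 bytes printed next to it in the dump, which is how the two forms in each ResourceList relate.
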
Nov 13 05:43:41.604: INFO: Latency metrics for node master1 Nov 13 05:43:41.604: INFO: Logging node info for node master2 Nov 13 05:43:41.607: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 201272 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:43:41.607: INFO: Logging kubelet events for node master2 Nov 13 05:43:41.609: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 05:43:41.633: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.633: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 05:43:41.633: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.633: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:43:41.633: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:43:41.633: INFO: Init container install-cni ready: true, restart count 0 Nov 13 05:43:41.633: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 05:43:41.633: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.633: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:43:41.633: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.633: INFO: Container coredns ready: true, restart count 1 Nov 13 05:43:41.633: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:43:41.633: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:43:41.633: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:43:41.633: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) 
Nov 13 05:43:41.633: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 05:43:41.633: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.633: INFO: Container nfd-controller ready: true, restart count 0 Nov 13 05:43:41.633: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.633: INFO: Container kube-apiserver ready: true, restart count 0 W1113 05:43:41.644493 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 05:43:41.711: INFO: Latency metrics for node master2 Nov 13 05:43:41.711: INFO: Logging node info for node master3 Nov 13 05:43:41.715: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 201214 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:32 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:32 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:32 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:43:32 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:43:41.715: INFO: Logging kubelet events for node master3 Nov 13 05:43:41.717: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 05:43:41.732: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.732: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 05:43:41.732: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.732: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 13 
05:43:41.732: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.732: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 05:43:41.732: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:43:41.732: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:43:41.732: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:43:41.732: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.732: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:43:41.732: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:43:41.732: INFO: Init container install-cni ready: true, restart count 0 Nov 13 05:43:41.732: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 05:43:41.732: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.732: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:43:41.732: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.732: INFO: Container autoscaler ready: true, restart count 1 W1113 05:43:41.746049 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 05:43:41.814: INFO: Latency metrics for node master3 Nov 13 05:43:41.814: INFO: Logging node info for node node1 Nov 13 05:43:41.817: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 201388 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1389":"csi-mock-csi-mock-volumes-1389","csi-mock-csi-mock-volumes-4684":"csi-mock-csi-mock-volumes-4684"} flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 05:42:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} 
{kube-controller-manager Update v1 2021-11-13 05:43:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}},"f:status":{"f:volumesAttached":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 
k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-1389^4,DevicePath:,},},Config:nil,},} Nov 13 05:43:41.817: INFO: Logging kubelet events for node node1 Nov 13 05:43:41.820: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 05:43:41.838: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:43:41.838: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:43:41.838: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 05:43:41.838: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container grafana ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:43:41.838: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Init container install-cni ready: true, restart count 2 Nov 13 05:43:41.838: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:43:41.838: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 05:43:41.838: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:43:41.838: INFO: csi-mockplugin-attacher-0 started at 2021-11-13 05:43:37 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container csi-attacher ready: false, restart count 0 Nov 13 05:43:41.838: INFO: csi-mockplugin-0 started at 2021-11-13 05:43:37 +0000 UTC (0+3 container statuses recorded) Nov 13 05:43:41.838: INFO: Container csi-provisioner ready: false, restart count 0 Nov 13 05:43:41.838: INFO: Container driver-registrar ready: false, restart count 0 Nov 13 05:43:41.838: INFO: Container mock ready: false, restart count 0 Nov 13 05:43:41.838: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded) Nov 13 05:43:41.838: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:43:41.838: INFO: pvc-volume-tester-pns94 started at 2021-11-13 05:43:37 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container volume-tester ready: false, restart count 0 Nov 13 05:43:41.838: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC 
(0+3 container statuses recorded) Nov 13 05:43:41.838: INFO: Container collectd ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:43:41.838: INFO: test-hostpath-type-z2k4c started at 2021-11-13 05:43:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container host-path-testing ready: false, restart count 0 Nov 13 05:43:41.838: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:43:41.838: INFO: csi-mockplugin-resizer-0 started at 2021-11-13 05:42:32 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container csi-resizer ready: true, restart count 0 Nov 13 05:43:41.838: INFO: test-hostpath-type-v56nw started at 2021-11-13 05:43:17 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container host-path-testing ready: true, restart count 0 Nov 13 05:43:41.838: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded) Nov 13 05:43:41.838: INFO: Container discover ready: false, restart count 0 Nov 13 05:43:41.838: INFO: Container init ready: false, restart count 0 Nov 13 05:43:41.838: INFO: Container install ready: false, restart count 0 Nov 13 05:43:41.838: INFO: csi-mockplugin-0 started at 2021-11-13 05:43:25 +0000 UTC (0+3 container statuses recorded) Nov 13 05:43:41.838: INFO: Container csi-provisioner ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container driver-registrar ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container mock ready: true, restart count 0 Nov 13 05:43:41.838: INFO: pvc-volume-tester-pg2nd started at 2021-11-13 05:42:44 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container volume-tester ready: true, restart count 0 Nov 13 05:43:41.838: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:43:41.838: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:43:41.838: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:43:41.838: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:43:41.838: INFO: test-hostpath-type-25lx8 started at 2021-11-13 05:43:33 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container host-path-sh-testing ready: true, restart count 0 Nov 13 05:43:41.838: INFO: csi-mockplugin-0 started at 2021-11-13 05:42:32 +0000 UTC (0+3 container statuses recorded) Nov 13 05:43:41.838: INFO: Container csi-provisioner ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container driver-registrar ready: true, restart count 0 Nov 13 05:43:41.838: INFO: Container mock ready: true, restart count 0 Nov 13 05:43:41.838: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:43:41.838: INFO: csi-mockplugin-attacher-0 
started at 2021-11-13 05:43:25 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:41.838: INFO: Container csi-attacher ready: true, restart count 0 W1113 05:43:41.852379 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 05:43:42.622: INFO: Latency metrics for node node1 Nov 13 05:43:42.622: INFO: Logging node info for node node2 Nov 13 05:43:42.625: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 201271 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7214":"csi-mock-csi-mock-volumes-7214"} flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-ver
sion.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kube-controller-manager Update v1 2021-11-13 05:40:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-11-13 05:42:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 
05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 05:43:35 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:373281573,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-7214^4],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 05:43:42.625: INFO: Logging kubelet events for node node2 Nov 13 05:43:42.628: INFO: Logging pods the kubelet thinks is on node node2 Nov 13 05:43:42.661: INFO: pod-b769eee3-1ad4-40ff-b01d-9992c395c8b6 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.661: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.661: INFO: pod-df6dbcc0-39cd-4d61-a928-8bd4be26687d started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.661: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.661: INFO: pod-87f8906f-348e-4b35-b67f-9bc08ea2597e started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.661: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.661: INFO: pod-e988419c-7f0e-476e-9a7d-b316979bbd61 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses 
recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: csi-mockplugin-0 started at 2021-11-13 05:41:41 +0000 UTC (0+3 container statuses recorded) Nov 13 05:43:42.662: INFO: Container csi-provisioner ready: true, restart count 0 Nov 13 05:43:42.662: INFO: Container driver-registrar ready: true, restart count 0 Nov 13 05:43:42.662: INFO: Container mock ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-36318523-1661-4055-8729-99d804a60ef6 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-a96e70c7-a4cf-47b8-ae2b-93cfdc6a3a53 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-4f2d07da-40e7-4343-b987-4bdf08732810 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-020951cc-a425-4659-b80a-da9eea5a5a02 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-5924ac3f-fc93-469e-8231-8667c6149c1a started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-436970ae-ca39-40e5-9440-d2ba749674e9 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-0f0086b6-3d27-4535-b69f-762b19ad3099 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: glusterdynamic-provisioner-zhpxn started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container glusterdynamic-provisioner ready: false, restart count 0 Nov 13 05:43:42.662: INFO: pod-07625564-cd03-4a40-bacc-92b6e2d88588 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 05:43:42.662: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:43:42.662: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:43:42.662: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-3db7caea-5816-4118-be2c-95c149cdb277 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-cf643719-40dd-4d4a-9d37-33e09a681ff9 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-b84c592e-cfd8-4ca3-9399-3e9b33e77efa started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container 
write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-15a6e407-863c-4e7a-905e-0a129b25527e started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-fb6fca0c-2ccf-4bb8-96d4-f975f019705b started at 2021-11-13 05:43:33 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:43:42.662: INFO: pod-294cf3ed-ae55-4906-901b-ba5a0408b908 started at 2021-11-13 05:43:35 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:43:42.662: INFO: pod-3d3753f9-f338-44d3-81d2-d22fdc733100 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-d31b150e-bef6-4c88-86c6-f0f7abfe9849 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:43:42.662: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:43:42.662: INFO: pod-0dd0e2dc-7714-477a-ae02-af321f727ff0 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-configmaps-55358fff-a9e7-4112-95af-1bad4c1052e1 started at 2021-11-13 05:40:18 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container agnhost-container ready: false, restart count 0 Nov 13 05:43:42.662: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-61c2b660-e5fb-49e1-83f1-11a6a9e4911f started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-5d07fbe0-6df0-4b97-a2c4-f3f8879246cf started at 2021-11-13 05:42:44 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: false, restart count 0 Nov 13 05:43:42.662: INFO: pod-fb96524a-a22a-45af-89e9-af65ef78f834 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-ffff663a-b7b5-4f6e-a4ca-eb28149e123d started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-6f55793b-65bf-4aa1-b865-072d100f5f31 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded) Nov 13 05:43:42.662: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:43:42.662: INFO: Container 
reconcile ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-01d2b05a-3e0a-4f48-98e2-7df4069fbaed started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-2ce80af8-c5fc-487d-b5f7-3184d8caff54 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-d49c450a-fa6e-475e-a617-3fd355a778c0 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-2195e011-f074-4485-a22a-83b8498c1c81 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-ac53b7c3-d7c5-4204-83cd-64bc0dda113c started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-57265d0d-4009-46d9-869f-e63f834933b9 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-298ab6a0-8a6f-4098-bcdb-31b7a5be06a5 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-3eee944d-16f9-4866-984a-5ebe51f7a785 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded) Nov 13 05:43:42.662: INFO: Container discover ready: false, restart count 0 Nov 13 05:43:42.662: INFO: Container init ready: false, restart count 0 Nov 13 05:43:42.662: INFO: Container install ready: false, restart count 0 Nov 13 05:43:42.662: INFO: pod-d4f4d6c3-02c0-4cc2-bd4b-cc153e821dbe started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-ac77cc73-d4f0-46c1-a36f-6b0cc18ba34c started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-0fe6a9f1-6897-4aa8-b72e-ba5dde3d38b1 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-ccccebe8-95d1-4e3f-93a1-e8477f95a51c started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-3a1479c2-4f7b-4108-8390-bda6371d81f2 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-5d369bb4-36d9-4d7d-8fc4-1b2ed37e5112 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-94a68ba9-2930-4ede-927b-fac27682c878 started at 2021-11-13 
05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:43:42.662: INFO: pod-9fef9506-3c7f-4069-8235-cb86929fdf0c started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-8b02d230-2cf2-4af7-aad8-70c240373c77 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:43:42.662: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:43:42.662: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Init container install-cni ready: true, restart count 2 Nov 13 05:43:42.662: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:43:42.662: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:43:42.662: INFO: pod-d40e99aa-1942-4d35-9ff1-3da56708bcd4 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 05:43:42.662: INFO: Container collectd ready: true, restart count 0 Nov 13 05:43:42.662: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:43:42.662: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pvc-volume-tester-5fjvj started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container volume-tester ready: false, restart count 0 Nov 13 05:43:42.662: INFO: hostexec-node2-g5zg6 started at 2021-11-13 05:42:08 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-87f4d753-2f3a-4080-8d2f-6a392c0aa931 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-45ab281b-f9ef-46aa-8da8-a3f5d4db93e9 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-f3573923-e09a-4a81-aa6a-06e567278151 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-4c0be481-ade4-4c7f-998d-b48ac98d419d started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container 
write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-e71c92e2-1d59-45a7-bd9d-89c28f91a406 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-42245241-274f-4266-aae4-83fd2bed4977 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-3ac06cea-ee55-45d3-97b6-cccfc8d7cee2 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: hostexec-node2-rg5jv started at 2021-11-13 05:42:33 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-386a6a35-31be-48fc-be6a-87a7aa08df15 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 Nov 13 05:43:42.662: INFO: pod-a33ff224-7a53-4f00-a67d-98cfdc1cd0e0 started at 2021-11-13 05:41:52 +0000 UTC (0+1 container statuses recorded) Nov 13 05:43:42.662: INFO: Container write-pod ready: true, restart count 0 W1113 05:43:42.676102 29 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 05:43:44.722: INFO: Latency metrics for node node2 Nov 13 05:43:44.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-456" for this suite. • Failure [9.329 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should give a volume the correct mode [LinuxOnly] [NodeConformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 Nov 13 05:43:41.474: Unexpected error: <*errors.errorString | 0xc0007a27f0>: { s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": 61267\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx", } expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected : mount type of "/test-volume": 61267 mode of file "/test-volume": dgtrwxrwxrwx to contain substring : mode of file "/test-volume": dtrwxrwx occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 ------------------------------ {"msg":"FAILED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":230,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:39.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace 
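Regarding the HostPath mode failure reported above: the assertion looks for the substring mode of file "/test-volume": dtrwxrwx (a directory with the sticky bit and 0777 permissions), but the container reported dgtrwxrwxrwx, the same mode with the setgid bit also set, which is what breaks the substring match. The mode letters follow Go's os.FileMode string formatting, which the mount-test container appears to rely on; the stdlib-only sketch below (mode values chosen to mirror the error message, not read from the node) reproduces both strings.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // What the test expects: a directory with the sticky bit and 0777 permissions.
        expected := os.ModeDir | os.ModeSticky | os.FileMode(0o777)
        // What the node reported: the same mode plus the setgid bit (the extra 'g').
        observed := os.ModeDir | os.ModeSetgid | os.ModeSticky | os.FileMode(0o777)

        fmt.Println(expected.String()) // dtrwxrwxrwx
        fmt.Println(observed.String()) // dgtrwxrwxrwx
    }

Since the check is a substring match, "dtrwxrwx" would match the front of "dtrwxrwxrwx"; the inserted "g" from the setgid bit on the hostPath directory is the only difference between the expected and observed output.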
[BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:43:39.879: INFO: The status of Pod test-hostpath-type-z2k4c is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:41.883: INFO: The status of Pod test-hostpath-type-z2k4c is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:43.884: INFO: The status of Pod test-hostpath-type-z2k4c is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:45.882: INFO: The status of Pod test-hostpath-type-z2k4c is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Nov 13 05:43:45.884: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-8425 PodName:test-hostpath-type-z2k4c ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:43:45.884: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:48.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-8425" for this suite. • [SLOW TEST:8.504 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory","total":-1,"completed":7,"skipped":181,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:48.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete default persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692 Nov 13 05:43:48.405: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:48.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "volume-provisioning-8290" for this suite. S [SKIPPING] [0.030 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner Default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:691 should create and delete default persistent volumes [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:693 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:44.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 STEP: Creating a pod to test downward API volume plugin Nov 13 05:43:44.957: INFO: Waiting up to 5m0s for pod "metadata-volume-e737d339-981d-4404-808e-6cada6bec034" in namespace "downward-api-2224" to be "Succeeded or Failed" Nov 13 05:43:44.959: INFO: Pod "metadata-volume-e737d339-981d-4404-808e-6cada6bec034": Phase="Pending", Reason="", readiness=false. Elapsed: 1.876644ms Nov 13 05:43:46.962: INFO: Pod "metadata-volume-e737d339-981d-4404-808e-6cada6bec034": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004826365s Nov 13 05:43:48.965: INFO: Pod "metadata-volume-e737d339-981d-4404-808e-6cada6bec034": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008579088s STEP: Saw pod success Nov 13 05:43:48.965: INFO: Pod "metadata-volume-e737d339-981d-4404-808e-6cada6bec034" satisfied condition "Succeeded or Failed" Nov 13 05:43:48.968: INFO: Trying to get logs from node node1 pod metadata-volume-e737d339-981d-4404-808e-6cada6bec034 container client-container: STEP: delete the pod Nov 13 05:43:48.987: INFO: Waiting for pod metadata-volume-e737d339-981d-4404-808e-6cada6bec034 to disappear Nov 13 05:43:48.991: INFO: Pod metadata-volume-e737d339-981d-4404-808e-6cada6bec034 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:48.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2224" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":325,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:49.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:43:49.062: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:49.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-8808" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be able to delete a non-existent PD without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:48.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:43:48.480: INFO: The status of Pod test-hostpath-type-tl5zw is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:50.484: INFO: The status of Pod test-hostpath-type-tl5zw is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:43:52.484: INFO: The status of Pod test-hostpath-type-tl5zw is Running (Ready = true) STEP: running on node node1 [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:56.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-4258" for this suite. 
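The socket spec above relies on HostPathUnset, i.e. a hostPath volume with no type set, which skips the type check entirely. A minimal sketch of an equivalent pod follows; it assumes a socket already exists at /mnt/test/asocket on node1 (as the suite's setup pod arranges), and the pod name, image, and mount path are illustrative assumptions.

# Sketch: hostPath volume with the type left unset (HostPathUnset), so no
# type validation is performed and the existing socket path mounts successfully.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-socket-demo
spec:
  nodeName: node1                  # the node that holds the socket, per the log
  restartPolicy: Never
  containers:
  - name: host-path-testing
    image: busybox
    command: ["sh", "-c", "ls -l /mnt/test/asocket"]
    volumeMounts:
    - name: sock
      mountPath: /mnt/test/asocket
  volumes:
  - name: sock
    hostPath:
      path: /mnt/test/asocket      # type omitted == HostPathUnset, no check
EOF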
• [SLOW TEST:8.084 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset","total":-1,"completed":8,"skipped":213,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:56.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Nov 13 05:43:56.556: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:56.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2839" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Nov 13 05:43:56.566: INFO: AfterEach: Cleaning up test resources Nov 13 05:43:56.566: INFO: pvc is nil Nov 13 05:43:56.566: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:56.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:43:56.727: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:43:56.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pv-111" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:56.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 STEP: Creating a pod to test emptydir volume type on tmpfs Nov 13 05:43:56.831: INFO: Waiting up to 5m0s for pod "pod-26279b1f-3e18-499b-b354-1246be32cf5f" in namespace "emptydir-4931" to be "Succeeded or Failed" Nov 13 05:43:56.833: INFO: Pod "pod-26279b1f-3e18-499b-b354-1246be32cf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086608ms Nov 13 05:43:58.838: INFO: Pod "pod-26279b1f-3e18-499b-b354-1246be32cf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007014158s Nov 13 05:44:00.840: INFO: Pod "pod-26279b1f-3e18-499b-b354-1246be32cf5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009466135s STEP: Saw pod success Nov 13 05:44:00.840: INFO: Pod "pod-26279b1f-3e18-499b-b354-1246be32cf5f" satisfied condition "Succeeded or Failed" Nov 13 05:44:00.843: INFO: Trying to get logs from node node1 pod pod-26279b1f-3e18-499b-b354-1246be32cf5f container test-container: STEP: delete the pod Nov 13 05:44:00.858: INFO: Waiting for pod pod-26279b1f-3e18-499b-b354-1246be32cf5f to disappear Nov 13 05:44:00.859: INFO: Pod pod-26279b1f-3e18-499b-b354-1246be32cf5f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:00.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4931" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":9,"skipped":314,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:00.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should provision storage with non-default reclaim policy Retain /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403 Nov 13 05:44:00.899: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:00.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-9630" for this suite. S [SKIPPING] [0.033 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should provision storage with non-default reclaim policy Retain [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:404 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:00.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4" Nov 13 05:44:02.990: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4 && dd if=/dev/zero of=/tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4/file] Namespace:persistent-local-volumes-test-7952 PodName:hostexec-node1-gsmnm 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:02.990: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:03.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7952 PodName:hostexec-node1-gsmnm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:03.105: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:03.226: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4 && chmod o+rwx /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4] Namespace:persistent-local-volumes-test-7952 PodName:hostexec-node1-gsmnm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:03.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:44:03.480: INFO: Creating a PV followed by a PVC Nov 13 05:44:03.487: INFO: Waiting for PV local-pv6grxh to bind to PVC pvc-6shb2 Nov 13 05:44:03.487: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6shb2] to have phase Bound Nov 13 05:44:03.491: INFO: PersistentVolumeClaim pvc-6shb2 found but phase is Pending instead of Bound. Nov 13 05:44:05.495: INFO: PersistentVolumeClaim pvc-6shb2 found but phase is Pending instead of Bound. Nov 13 05:44:07.498: INFO: PersistentVolumeClaim pvc-6shb2 found but phase is Pending instead of Bound. Nov 13 05:44:09.503: INFO: PersistentVolumeClaim pvc-6shb2 found but phase is Pending instead of Bound. Nov 13 05:44:11.506: INFO: PersistentVolumeClaim pvc-6shb2 found but phase is Pending instead of Bound. 
Nov 13 05:44:13.510: INFO: PersistentVolumeClaim pvc-6shb2 found and phase=Bound (10.022972857s) Nov 13 05:44:13.510: INFO: Waiting up to 3m0s for PersistentVolume local-pv6grxh to have phase Bound Nov 13 05:44:13.513: INFO: PersistentVolume local-pv6grxh found and phase=Bound (2.459238ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:44:17.539: INFO: pod "pod-776e4e29-5833-4ae1-84b4-3cb1aa81a0dd" created on Node "node1" STEP: Writing in pod1 Nov 13 05:44:17.539: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7952 PodName:pod-776e4e29-5833-4ae1-84b4-3cb1aa81a0dd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:17.539: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:17.630: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:44:17.630: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7952 PodName:pod-776e4e29-5833-4ae1-84b4-3cb1aa81a0dd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:17.630: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:17.710: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:44:17.710: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7952 PodName:pod-776e4e29-5833-4ae1-84b4-3cb1aa81a0dd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:17.710: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:17.794: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-776e4e29-5833-4ae1-84b4-3cb1aa81a0dd in namespace persistent-local-volumes-test-7952 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:44:17.799: INFO: Deleting PersistentVolumeClaim "pvc-6shb2" Nov 13 05:44:17.803: INFO: Deleting PersistentVolume "local-pv6grxh" Nov 13 05:44:17.807: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4] Namespace:persistent-local-volumes-test-7952 PodName:hostexec-node1-gsmnm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:17.807: INFO: >>> kubeConfig: /root/.kube/config Nov 13 
05:44:17.896: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7952 PodName:hostexec-node1-gsmnm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:17.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4/file Nov 13 05:44:17.999: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-7952 PodName:hostexec-node1-gsmnm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:17.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4 Nov 13 05:44:18.110: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6af75f49-971f-48f9-825b-900bc11b5fb4] Namespace:persistent-local-volumes-test-7952 PodName:hostexec-node1-gsmnm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:18.110: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:18.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7952" for this suite. 
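The "blockfswithformat" setup and cleanup steps logged above boil down to the node-side commands sketched here; they are the same mkdir/dd/losetup/mkfs/mount and umount/losetup -d/rm sequence the suite runs through its hostexec pod with nsenter. The directory name is an illustrative stand-in for the generated /tmp/local-volume-test-* path, and the commands assume root on the target node.

# Sketch: build and tear down a file-backed, ext4-formatted loop device used
# as a local volume, mirroring the logged ExecWithOptions commands.
DIR=/tmp/local-volume-test-demo    # illustrative path

# Setup: backing file, loop device, ext4 filesystem, world-writable mount
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
losetup -f "$DIR/file"
LOOP=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
mkfs -t ext4 "$LOOP"
mount -t ext4 "$LOOP" "$DIR"
chmod o+rwx "$DIR"

# Teardown, mirroring the cleanup steps in the AfterEach above
umount "$DIR"
losetup -d "$LOOP"
rm -r "$DIR"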
• [SLOW TEST:17.264 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":10,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:37.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 STEP: Building a driver namespace object, basename csi-mock-volumes-5006 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:43:37.605: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-attacher Nov 13 05:43:37.608: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5006 Nov 13 05:43:37.608: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5006 Nov 13 05:43:37.610: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5006 Nov 13 05:43:37.614: INFO: creating *v1.Role: csi-mock-volumes-5006-4904/external-attacher-cfg-csi-mock-volumes-5006 Nov 13 05:43:37.616: INFO: creating *v1.RoleBinding: csi-mock-volumes-5006-4904/csi-attacher-role-cfg Nov 13 05:43:37.619: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-provisioner Nov 13 05:43:37.621: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5006 Nov 13 05:43:37.621: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5006 Nov 13 05:43:37.624: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5006 Nov 13 05:43:37.626: INFO: creating *v1.Role: csi-mock-volumes-5006-4904/external-provisioner-cfg-csi-mock-volumes-5006 Nov 13 05:43:37.628: INFO: creating *v1.RoleBinding: csi-mock-volumes-5006-4904/csi-provisioner-role-cfg Nov 13 05:43:37.632: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-resizer Nov 13 05:43:37.635: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5006 Nov 13 05:43:37.635: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5006 Nov 13 05:43:37.638: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5006 Nov 13 05:43:37.641: INFO: creating *v1.Role: csi-mock-volumes-5006-4904/external-resizer-cfg-csi-mock-volumes-5006 Nov 13 
05:43:37.644: INFO: creating *v1.RoleBinding: csi-mock-volumes-5006-4904/csi-resizer-role-cfg Nov 13 05:43:37.646: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-snapshotter Nov 13 05:43:37.649: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5006 Nov 13 05:43:37.649: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5006 Nov 13 05:43:37.652: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5006 Nov 13 05:43:37.655: INFO: creating *v1.Role: csi-mock-volumes-5006-4904/external-snapshotter-leaderelection-csi-mock-volumes-5006 Nov 13 05:43:37.657: INFO: creating *v1.RoleBinding: csi-mock-volumes-5006-4904/external-snapshotter-leaderelection Nov 13 05:43:37.660: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-mock Nov 13 05:43:37.663: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5006 Nov 13 05:43:37.666: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5006 Nov 13 05:43:37.668: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5006 Nov 13 05:43:37.671: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5006 Nov 13 05:43:37.673: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5006 Nov 13 05:43:37.676: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5006 Nov 13 05:43:37.678: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5006 Nov 13 05:43:37.681: INFO: creating *v1.StatefulSet: csi-mock-volumes-5006-4904/csi-mockplugin Nov 13 05:43:37.685: INFO: creating *v1.StatefulSet: csi-mock-volumes-5006-4904/csi-mockplugin-attacher Nov 13 05:43:37.689: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5006 to register on node node1 STEP: Creating pod Nov 13 05:43:47.203: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:43:47.208: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-l2khl] to have phase Bound Nov 13 05:43:47.210: INFO: PersistentVolumeClaim pvc-l2khl found but phase is Pending instead of Bound. 
Nov 13 05:43:49.214: INFO: PersistentVolumeClaim pvc-l2khl found and phase=Bound (2.006143915s) STEP: Deleting the previously created pod Nov 13 05:44:03.233: INFO: Deleting pod "pvc-volume-tester-gfhb8" in namespace "csi-mock-volumes-5006" Nov 13 05:44:03.238: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gfhb8" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:44:07.252: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/76609df0-a947-4212-b8a1-00ac718f8487/volumes/kubernetes.io~csi/pvc-440ac6b0-1c61-4aaa-964a-30d1d7c9c40b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-gfhb8 Nov 13 05:44:07.252: INFO: Deleting pod "pvc-volume-tester-gfhb8" in namespace "csi-mock-volumes-5006" STEP: Deleting claim pvc-l2khl Nov 13 05:44:07.263: INFO: Waiting up to 2m0s for PersistentVolume pvc-440ac6b0-1c61-4aaa-964a-30d1d7c9c40b to get deleted Nov 13 05:44:07.265: INFO: PersistentVolume pvc-440ac6b0-1c61-4aaa-964a-30d1d7c9c40b found and phase=Bound (2.231268ms) Nov 13 05:44:09.269: INFO: PersistentVolume pvc-440ac6b0-1c61-4aaa-964a-30d1d7c9c40b was removed STEP: Deleting storageclass csi-mock-volumes-5006-sccbdq4 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5006 STEP: Waiting for namespaces [csi-mock-volumes-5006] to vanish STEP: uninstalling csi mock driver Nov 13 05:44:15.284: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-attacher Nov 13 05:44:15.289: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5006 Nov 13 05:44:15.292: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5006 Nov 13 05:44:15.296: INFO: deleting *v1.Role: csi-mock-volumes-5006-4904/external-attacher-cfg-csi-mock-volumes-5006 Nov 13 05:44:15.300: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5006-4904/csi-attacher-role-cfg Nov 13 05:44:15.305: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-provisioner Nov 13 05:44:15.309: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5006 Nov 13 05:44:15.313: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5006 Nov 13 05:44:15.316: INFO: deleting *v1.Role: csi-mock-volumes-5006-4904/external-provisioner-cfg-csi-mock-volumes-5006 Nov 13 05:44:15.320: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5006-4904/csi-provisioner-role-cfg Nov 13 05:44:15.323: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-resizer Nov 13 05:44:15.327: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5006 Nov 13 05:44:15.330: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5006 Nov 13 05:44:15.334: INFO: deleting *v1.Role: csi-mock-volumes-5006-4904/external-resizer-cfg-csi-mock-volumes-5006 Nov 13 05:44:15.338: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5006-4904/csi-resizer-role-cfg Nov 13 05:44:15.341: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-snapshotter Nov 13 05:44:15.345: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5006 Nov 13 05:44:15.349: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5006 Nov 13 05:44:15.352: INFO: deleting *v1.Role: csi-mock-volumes-5006-4904/external-snapshotter-leaderelection-csi-mock-volumes-5006 Nov 13 
05:44:15.355: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5006-4904/external-snapshotter-leaderelection Nov 13 05:44:15.359: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5006-4904/csi-mock Nov 13 05:44:15.362: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5006 Nov 13 05:44:15.365: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5006 Nov 13 05:44:15.368: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5006 Nov 13 05:44:15.371: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5006 Nov 13 05:44:15.375: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5006 Nov 13 05:44:15.378: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5006 Nov 13 05:44:15.383: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5006 Nov 13 05:44:15.386: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5006-4904/csi-mockplugin Nov 13 05:44:15.390: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5006-4904/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5006-4904 STEP: Waiting for namespaces [csi-mock-volumes-5006-4904] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:27.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:49.865 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374 token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":4,"skipped":59,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:08.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755" Nov 13 05:43:30.787: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755 && dd if=/dev/zero of=/tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755/file bs=4096 count=5120 && losetup -f 
/tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755/file] Namespace:persistent-local-volumes-test-5775 PodName:hostexec-node2-g5zg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:43:30.787: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:43:30.980: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5775 PodName:hostexec-node2-g5zg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:43:30.980: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:43:31.115: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755 && chmod o+rwx /tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755] Namespace:persistent-local-volumes-test-5775 PodName:hostexec-node2-g5zg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:43:31.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:43:31.410: INFO: Creating a PV followed by a PVC Nov 13 05:43:31.421: INFO: Waiting for PV local-pvwsznz to bind to PVC pvc-svgmc Nov 13 05:43:31.421: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-svgmc] to have phase Bound Nov 13 05:43:31.423: INFO: PersistentVolumeClaim pvc-svgmc found but phase is Pending instead of Bound. Nov 13 05:43:33.426: INFO: PersistentVolumeClaim pvc-svgmc found and phase=Bound (2.005022752s) Nov 13 05:43:33.426: INFO: Waiting up to 3m0s for PersistentVolume local-pvwsznz to have phase Bound Nov 13 05:43:33.429: INFO: PersistentVolume local-pvwsznz found and phase=Bound (2.530024ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:44:27.455: INFO: pod "pod-fb6fca0c-2ccf-4bb8-96d4-f975f019705b" created on Node "node2" STEP: Writing in pod1 Nov 13 05:44:27.456: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5775 PodName:pod-fb6fca0c-2ccf-4bb8-96d4-f975f019705b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:27.456: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:28.236: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:44:28.237: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5775 PodName:pod-fb6fca0c-2ccf-4bb8-96d4-f975f019705b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:28.237: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:28.328: INFO: podRWCmdExec cmd: "cat 
/mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-fb6fca0c-2ccf-4bb8-96d4-f975f019705b in namespace persistent-local-volumes-test-5775 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:44:28.334: INFO: Deleting PersistentVolumeClaim "pvc-svgmc" Nov 13 05:44:28.338: INFO: Deleting PersistentVolume "local-pvwsznz" Nov 13 05:44:28.342: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755] Namespace:persistent-local-volumes-test-5775 PodName:hostexec-node2-g5zg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:28.342: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:28.486: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5775 PodName:hostexec-node2-g5zg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:28.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755/file Nov 13 05:44:28.596: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5775 PodName:hostexec-node2-g5zg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:28.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755 Nov 13 05:44:28.703: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-46eb4846-0c01-4aef-b0bb-4a98e8bb7755] Namespace:persistent-local-volumes-test-5775 PodName:hostexec-node2-g5zg6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:28.703: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:28.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5775" for this suite. 
• [SLOW TEST:140.101 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:25.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-1389 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:43:25.508: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-attacher Nov 13 05:43:25.511: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1389 Nov 13 05:43:25.511: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1389 Nov 13 05:43:25.514: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1389 Nov 13 05:43:25.517: INFO: creating *v1.Role: csi-mock-volumes-1389-1766/external-attacher-cfg-csi-mock-volumes-1389 Nov 13 05:43:25.520: INFO: creating *v1.RoleBinding: csi-mock-volumes-1389-1766/csi-attacher-role-cfg Nov 13 05:43:25.522: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-provisioner Nov 13 05:43:25.525: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1389 Nov 13 05:43:25.525: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1389 Nov 13 05:43:25.528: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1389 Nov 13 05:43:25.531: INFO: creating *v1.Role: csi-mock-volumes-1389-1766/external-provisioner-cfg-csi-mock-volumes-1389 Nov 13 05:43:25.533: INFO: creating *v1.RoleBinding: csi-mock-volumes-1389-1766/csi-provisioner-role-cfg Nov 13 05:43:25.536: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-resizer Nov 13 05:43:25.538: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1389 Nov 13 05:43:25.538: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1389 Nov 13 05:43:25.541: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1389 Nov 13 05:43:25.543: INFO: creating *v1.Role: csi-mock-volumes-1389-1766/external-resizer-cfg-csi-mock-volumes-1389 Nov 13 05:43:25.545: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-1389-1766/csi-resizer-role-cfg Nov 13 05:43:25.548: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-snapshotter Nov 13 05:43:25.550: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1389 Nov 13 05:43:25.550: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1389 Nov 13 05:43:25.553: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1389 Nov 13 05:43:25.555: INFO: creating *v1.Role: csi-mock-volumes-1389-1766/external-snapshotter-leaderelection-csi-mock-volumes-1389 Nov 13 05:43:25.558: INFO: creating *v1.RoleBinding: csi-mock-volumes-1389-1766/external-snapshotter-leaderelection Nov 13 05:43:25.560: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-mock Nov 13 05:43:25.563: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1389 Nov 13 05:43:25.566: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1389 Nov 13 05:43:25.568: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1389 Nov 13 05:43:25.571: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1389 Nov 13 05:43:25.573: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1389 Nov 13 05:43:25.576: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1389 Nov 13 05:43:25.578: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1389 Nov 13 05:43:25.580: INFO: creating *v1.StatefulSet: csi-mock-volumes-1389-1766/csi-mockplugin Nov 13 05:43:25.585: INFO: creating *v1.StatefulSet: csi-mock-volumes-1389-1766/csi-mockplugin-attacher Nov 13 05:43:25.587: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1389 to register on node node1 STEP: Creating pod Nov 13 05:43:35.103: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:43:35.108: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-n2v2p] to have phase Bound Nov 13 05:43:35.110: INFO: PersistentVolumeClaim pvc-n2v2p found but phase is Pending instead of Bound. 
Nov 13 05:43:37.115: INFO: PersistentVolumeClaim pvc-n2v2p found and phase=Bound (2.006482506s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-pns94 Nov 13 05:43:59.144: INFO: Deleting pod "pvc-volume-tester-pns94" in namespace "csi-mock-volumes-1389" Nov 13 05:43:59.151: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pns94" to be fully deleted STEP: Deleting claim pvc-n2v2p Nov 13 05:44:13.163: INFO: Waiting up to 2m0s for PersistentVolume pvc-db74f286-468d-4a50-bdab-0dec7256b117 to get deleted Nov 13 05:44:13.165: INFO: PersistentVolume pvc-db74f286-468d-4a50-bdab-0dec7256b117 found and phase=Bound (1.951353ms) Nov 13 05:44:15.169: INFO: PersistentVolume pvc-db74f286-468d-4a50-bdab-0dec7256b117 was removed STEP: Deleting storageclass csi-mock-volumes-1389-sckk9vq STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1389 STEP: Waiting for namespaces [csi-mock-volumes-1389] to vanish STEP: uninstalling csi mock driver Nov 13 05:44:21.183: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-attacher Nov 13 05:44:21.186: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1389 Nov 13 05:44:21.190: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1389 Nov 13 05:44:21.194: INFO: deleting *v1.Role: csi-mock-volumes-1389-1766/external-attacher-cfg-csi-mock-volumes-1389 Nov 13 05:44:21.197: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1389-1766/csi-attacher-role-cfg Nov 13 05:44:21.201: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-provisioner Nov 13 05:44:21.203: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1389 Nov 13 05:44:21.208: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1389 Nov 13 05:44:21.211: INFO: deleting *v1.Role: csi-mock-volumes-1389-1766/external-provisioner-cfg-csi-mock-volumes-1389 Nov 13 05:44:21.214: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1389-1766/csi-provisioner-role-cfg Nov 13 05:44:21.218: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-resizer Nov 13 05:44:21.221: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1389 Nov 13 05:44:21.224: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1389 Nov 13 05:44:21.227: INFO: deleting *v1.Role: csi-mock-volumes-1389-1766/external-resizer-cfg-csi-mock-volumes-1389 Nov 13 05:44:21.230: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1389-1766/csi-resizer-role-cfg Nov 13 05:44:21.234: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-snapshotter Nov 13 05:44:21.237: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1389 Nov 13 05:44:21.240: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1389 Nov 13 05:44:21.244: INFO: deleting *v1.Role: csi-mock-volumes-1389-1766/external-snapshotter-leaderelection-csi-mock-volumes-1389 Nov 13 05:44:21.247: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1389-1766/external-snapshotter-leaderelection Nov 13 05:44:21.250: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1389-1766/csi-mock Nov 13 05:44:21.254: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1389 Nov 13 05:44:21.257: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1389 Nov 13 05:44:21.261: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1389 Nov 13 05:44:21.264: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1389 Nov 13 05:44:21.268: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1389 Nov 13 05:44:21.271: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1389 Nov 13 05:44:21.274: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1389 Nov 13 05:44:21.277: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1389-1766/csi-mockplugin Nov 13 05:44:21.281: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1389-1766/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1389-1766 STEP: Waiting for namespaces [csi-mock-volumes-1389-1766] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:33.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:67.844 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":13,"skipped":364,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:32.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-4684 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:42:32.406: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-attacher Nov 13 05:42:32.409: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4684 Nov 13 05:42:32.409: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4684 Nov 13 05:42:32.412: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4684 Nov 13 05:42:32.415: INFO: creating *v1.Role: csi-mock-volumes-4684-501/external-attacher-cfg-csi-mock-volumes-4684 Nov 13 05:42:32.417: INFO: creating *v1.RoleBinding: csi-mock-volumes-4684-501/csi-attacher-role-cfg Nov 13 05:42:32.419: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-provisioner Nov 13 05:42:32.422: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4684 Nov 13 05:42:32.422: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4684 Nov 13 05:42:32.424: INFO: creating 
*v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4684 Nov 13 05:42:32.427: INFO: creating *v1.Role: csi-mock-volumes-4684-501/external-provisioner-cfg-csi-mock-volumes-4684 Nov 13 05:42:32.430: INFO: creating *v1.RoleBinding: csi-mock-volumes-4684-501/csi-provisioner-role-cfg Nov 13 05:42:32.433: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-resizer Nov 13 05:42:32.435: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4684 Nov 13 05:42:32.436: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4684 Nov 13 05:42:32.438: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4684 Nov 13 05:42:32.440: INFO: creating *v1.Role: csi-mock-volumes-4684-501/external-resizer-cfg-csi-mock-volumes-4684 Nov 13 05:42:32.443: INFO: creating *v1.RoleBinding: csi-mock-volumes-4684-501/csi-resizer-role-cfg Nov 13 05:42:32.445: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-snapshotter Nov 13 05:42:32.448: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4684 Nov 13 05:42:32.448: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4684 Nov 13 05:42:32.451: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4684 Nov 13 05:42:32.454: INFO: creating *v1.Role: csi-mock-volumes-4684-501/external-snapshotter-leaderelection-csi-mock-volumes-4684 Nov 13 05:42:32.457: INFO: creating *v1.RoleBinding: csi-mock-volumes-4684-501/external-snapshotter-leaderelection Nov 13 05:42:32.460: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-mock Nov 13 05:42:32.463: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4684 Nov 13 05:42:32.465: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4684 Nov 13 05:42:32.468: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4684 Nov 13 05:42:32.472: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4684 Nov 13 05:42:32.475: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4684 Nov 13 05:42:32.477: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4684 Nov 13 05:42:32.480: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4684 Nov 13 05:42:32.483: INFO: creating *v1.StatefulSet: csi-mock-volumes-4684-501/csi-mockplugin Nov 13 05:42:32.487: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4684 Nov 13 05:42:32.490: INFO: creating *v1.StatefulSet: csi-mock-volumes-4684-501/csi-mockplugin-resizer Nov 13 05:42:32.493: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4684" Nov 13 05:42:32.495: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4684 to register on node node1 STEP: Creating pod Nov 13 05:42:42.012: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:42:42.016: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-m5rsd] to have phase Bound Nov 13 05:42:42.018: INFO: PersistentVolumeClaim pvc-m5rsd found but phase is Pending instead of Bound. 
Nov 13 05:42:44.022: INFO: PersistentVolumeClaim pvc-m5rsd found and phase=Bound (2.006099668s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-pg2nd Nov 13 05:44:12.068: INFO: Deleting pod "pvc-volume-tester-pg2nd" in namespace "csi-mock-volumes-4684" Nov 13 05:44:12.074: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pg2nd" to be fully deleted STEP: Deleting claim pvc-m5rsd Nov 13 05:44:16.088: INFO: Waiting up to 2m0s for PersistentVolume pvc-c225c3d1-6162-4d9b-9485-ef4dd7f788c2 to get deleted Nov 13 05:44:16.090: INFO: PersistentVolume pvc-c225c3d1-6162-4d9b-9485-ef4dd7f788c2 found and phase=Bound (2.468012ms) Nov 13 05:44:18.095: INFO: PersistentVolume pvc-c225c3d1-6162-4d9b-9485-ef4dd7f788c2 was removed STEP: Deleting storageclass csi-mock-volumes-4684-sc2wjt4 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4684 STEP: Waiting for namespaces [csi-mock-volumes-4684] to vanish STEP: uninstalling csi mock driver Nov 13 05:44:24.110: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-attacher Nov 13 05:44:24.114: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4684 Nov 13 05:44:24.118: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4684 Nov 13 05:44:24.121: INFO: deleting *v1.Role: csi-mock-volumes-4684-501/external-attacher-cfg-csi-mock-volumes-4684 Nov 13 05:44:24.125: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4684-501/csi-attacher-role-cfg Nov 13 05:44:24.128: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-provisioner Nov 13 05:44:24.132: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4684 Nov 13 05:44:24.138: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4684 Nov 13 05:44:24.146: INFO: deleting *v1.Role: csi-mock-volumes-4684-501/external-provisioner-cfg-csi-mock-volumes-4684 Nov 13 05:44:24.154: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4684-501/csi-provisioner-role-cfg Nov 13 05:44:24.161: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-resizer Nov 13 05:44:24.165: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4684 Nov 13 05:44:24.168: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4684 Nov 13 05:44:24.172: INFO: deleting *v1.Role: csi-mock-volumes-4684-501/external-resizer-cfg-csi-mock-volumes-4684 Nov 13 05:44:24.175: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4684-501/csi-resizer-role-cfg Nov 13 05:44:24.178: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-snapshotter Nov 13 05:44:24.181: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4684 Nov 13 05:44:24.184: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4684 Nov 13 05:44:24.187: INFO: deleting *v1.Role: csi-mock-volumes-4684-501/external-snapshotter-leaderelection-csi-mock-volumes-4684 Nov 13 05:44:24.191: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4684-501/external-snapshotter-leaderelection Nov 13 05:44:24.194: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4684-501/csi-mock Nov 13 05:44:24.199: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4684 Nov 13 05:44:24.202: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4684 Nov 13 05:44:24.206: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4684 Nov 13 05:44:24.209: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4684 Nov 13 05:44:24.212: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4684 Nov 13 05:44:24.216: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4684 Nov 13 05:44:24.219: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4684 Nov 13 05:44:24.222: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4684-501/csi-mockplugin Nov 13 05:44:24.226: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4684 Nov 13 05:44:24.230: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4684-501/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-4684-501 STEP: Waiting for namespaces [csi-mock-volumes-4684-501] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:36.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:123.918 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":4,"skipped":98,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:28.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:44:28.956: INFO: The status of Pod test-hostpath-type-vdfbg is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:44:30.960: INFO: The status of Pod test-hostpath-type-vdfbg is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:44:32.960: INFO: The status of Pod test-hostpath-type-vdfbg is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:44:34.960: INFO: The status of Pod test-hostpath-type-vdfbg is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:44:36.959: INFO: The status of Pod test-hostpath-type-vdfbg is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Nov 13 05:44:36.961: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-7937 PodName:test-hostpath-type-vdfbg ContainerName:host-path-testing Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:36.961: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:39.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-7937" for this suite. • [SLOW TEST:10.959 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev","total":-1,"completed":6,"skipped":212,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:36.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:44:36.459: INFO: The status of Pod test-hostpath-type-2tnn8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:44:38.463: INFO: The status of Pod test-hostpath-type-2tnn8 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:44:40.463: INFO: The status of Pod test-hostpath-type-2tnn8 is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Nov 13 05:44:40.465: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-1619 PodName:test-hostpath-type-2tnn8 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:40.466: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:44.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-1619" for this suite. 
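Both HostPathType specs above hinge on the hostPath volume's type field matching what actually exists at the path (the block device case must fail under HostPathCharDev, the character device case must succeed). A minimal pod sketch of the matching case; pod name, image and device path are illustrative, and the path is assumed to already be a character device created beforehand with mknod, as in the log:

  #!/bin/sh
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostpath-chardev-demo
  spec:
    containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: dev
        mountPath: /data
    volumes:
    - name: dev
      hostPath:
        path: /mnt/test/achardev
        type: CharDevice    # kubelet refuses the mount if the path is not a char device
  EOF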
• [SLOW TEST:8.188 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev","total":-1,"completed":5,"skipped":184,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:44.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:44:44.666: INFO: The status of Pod test-hostpath-type-c85b7 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:44:46.670: INFO: The status of Pod test-hostpath-type-c85b7 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:44:48.672: INFO: The status of Pod test-hostpath-type-c85b7 is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:50.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-199" for this suite. 
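The "Checking for HostPathType error event" step waits for the mount failure kubelet reports when a socket is mounted with type File. Outside the framework the same signal can be inspected roughly as below; pod and namespace are passed as arguments, and the exact event reason wording depends on the kubelet version:

  #!/bin/sh
  POD="${1:?usage: inspect-events.sh <pod> [namespace]}"
  NS="${2:-default}"
  # the type mismatch surfaces as a FailedMount-style event attached to the pod
  kubectl -n "$NS" describe pod "$POD"
  kubectl -n "$NS" get events --field-selector "involvedObject.name=$POD"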
• [SLOW TEST:6.079 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile","total":-1,"completed":6,"skipped":193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:33.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-4ce9a04b-b899-43f3-8814-0412d685d51f" Nov 13 05:43:33.169: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4ce9a04b-b899-43f3-8814-0412d685d51f && dd if=/dev/zero of=/tmp/local-volume-test-4ce9a04b-b899-43f3-8814-0412d685d51f/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-4ce9a04b-b899-43f3-8814-0412d685d51f/file] Namespace:persistent-local-volumes-test-4924 PodName:hostexec-node2-rg5jv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:43:33.169: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:43:33.319: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4ce9a04b-b899-43f3-8814-0412d685d51f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4924 PodName:hostexec-node2-rg5jv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:43:33.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:43:33.495: INFO: Creating a PV followed by a PVC Nov 13 05:43:33.504: INFO: Waiting for PV local-pvknjds to bind to PVC pvc-jvz67 Nov 13 05:43:33.504: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jvz67] to have phase Bound Nov 13 05:43:33.506: INFO: PersistentVolumeClaim pvc-jvz67 found but phase is Pending instead of Bound. 
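The block-device preparation above (dd a backing file, then losetup it) is the whole of the "blockfswithoutformat" volume type. Condensed into a standalone sketch with a placeholder path, and with losetup -j used in place of the log's losetup | grep pipeline:

  #!/bin/sh
  DIR=/tmp/local-volume-test-demo
  mkdir -p "$DIR"
  dd if=/dev/zero of="$DIR/file" bs=4096 count=5120    # ~20 MiB backing file
  losetup -f "$DIR/file"                               # attach to the first free loop device
  LOOP=$(losetup -j "$DIR/file" | cut -d: -f1)         # resolve which /dev/loopN was used
  echo "attached $DIR/file as $LOOP"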
Nov 13 05:43:35.509: INFO: PersistentVolumeClaim pvc-jvz67 found and phase=Bound (2.005835411s) Nov 13 05:43:35.509: INFO: Waiting up to 3m0s for PersistentVolume local-pvknjds to have phase Bound Nov 13 05:43:35.512: INFO: PersistentVolume local-pvknjds found and phase=Bound (2.127427ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:44:29.539: INFO: pod "pod-294cf3ed-ae55-4906-901b-ba5a0408b908" created on Node "node2" STEP: Writing in pod1 Nov 13 05:44:29.539: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4924 PodName:pod-294cf3ed-ae55-4906-901b-ba5a0408b908 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:29.539: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:29.619: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:44:29.619: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4924 PodName:pod-294cf3ed-ae55-4906-901b-ba5a0408b908 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:29.619: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:29.706: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-294cf3ed-ae55-4906-901b-ba5a0408b908 in namespace persistent-local-volumes-test-4924 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:44:55.737: INFO: pod "pod-83fb7efb-0fa5-4301-abdd-174081db42af" created on Node "node2" STEP: Reading in pod2 Nov 13 05:44:55.737: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4924 PodName:pod-83fb7efb-0fa5-4301-abdd-174081db42af ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:55.737: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:55.817: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-83fb7efb-0fa5-4301-abdd-174081db42af in namespace persistent-local-volumes-test-4924 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:44:55.822: INFO: Deleting PersistentVolumeClaim "pvc-jvz67" Nov 13 05:44:55.826: INFO: Deleting PersistentVolume "local-pvknjds" Nov 13 05:44:55.830: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4ce9a04b-b899-43f3-8814-0412d685d51f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4924 PodName:hostexec-node2-rg5jv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:55.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-4ce9a04b-b899-43f3-8814-0412d685d51f/file Nov 13 05:44:55.914: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-4924 PodName:hostexec-node2-rg5jv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:55.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-4ce9a04b-b899-43f3-8814-0412d685d51f Nov 13 05:44:55.996: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4ce9a04b-b899-43f3-8814-0412d685d51f] Namespace:persistent-local-volumes-test-4924 PodName:hostexec-node2-rg5jv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:55.996: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:44:56.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4924" for this suite. • [SLOW TEST:142.990 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":233,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:56.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:44:56.345: INFO: The status of Pod test-hostpath-type-7fnpk is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:44:58.349: INFO: The status of Pod test-hostpath-type-7fnpk is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:00.348: INFO: The status of Pod test-hostpath-type-7fnpk is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Nov 13 05:45:00.350: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-4470 PodName:test-hostpath-type-7fnpk 
ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:00.350: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:02.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-4470" for this suite. • [SLOW TEST:6.170 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev","total":-1,"completed":7,"skipped":336,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:50.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e" Nov 13 05:44:52.829: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e && dd if=/dev/zero of=/tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e/file] Namespace:persistent-local-volumes-test-7676 PodName:hostexec-node1-tmr6g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:52.830: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:53.035: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7676 PodName:hostexec-node1-tmr6g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:53.035: INFO: >>> kubeConfig: /root/.kube/config Nov 
13 05:44:53.150: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e && chmod o+rwx /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e] Namespace:persistent-local-volumes-test-7676 PodName:hostexec-node1-tmr6g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:53.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:44:53.396: INFO: Creating a PV followed by a PVC Nov 13 05:44:53.403: INFO: Waiting for PV local-pv7j7q8 to bind to PVC pvc-xzx4f Nov 13 05:44:53.403: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xzx4f] to have phase Bound Nov 13 05:44:53.405: INFO: PersistentVolumeClaim pvc-xzx4f found but phase is Pending instead of Bound. Nov 13 05:44:55.410: INFO: PersistentVolumeClaim pvc-xzx4f found and phase=Bound (2.007019483s) Nov 13 05:44:55.410: INFO: Waiting up to 3m0s for PersistentVolume local-pv7j7q8 to have phase Bound Nov 13 05:44:55.412: INFO: PersistentVolume local-pv7j7q8 found and phase=Bound (1.93056ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:44:59.442: INFO: pod "pod-19842e8a-5498-42a7-b12b-56efe8bfae85" created on Node "node1" STEP: Writing in pod1 Nov 13 05:44:59.442: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7676 PodName:pod-19842e8a-5498-42a7-b12b-56efe8bfae85 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:59.442: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:59.546: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:44:59.546: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7676 PodName:pod-19842e8a-5498-42a7-b12b-56efe8bfae85 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:44:59.546: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:59.626: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-19842e8a-5498-42a7-b12b-56efe8bfae85 in namespace persistent-local-volumes-test-7676 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:45:03.653: INFO: pod "pod-d7b94e5f-6197-46de-a3fd-7894c5cac505" created on Node "node1" STEP: Reading in pod2 Nov 13 05:45:03.653: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7676 PodName:pod-d7b94e5f-6197-46de-a3fd-7894c5cac505 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:03.653: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:03.731: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-d7b94e5f-6197-46de-a3fd-7894c5cac505 in namespace persistent-local-volumes-test-7676 [AfterEach] [Volume type: blockfswithformat] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:45:03.736: INFO: Deleting PersistentVolumeClaim "pvc-xzx4f" Nov 13 05:45:03.740: INFO: Deleting PersistentVolume "local-pv7j7q8" Nov 13 05:45:03.744: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e] Namespace:persistent-local-volumes-test-7676 PodName:hostexec-node1-tmr6g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:03.744: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:03.845: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7676 PodName:hostexec-node1-tmr6g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:03.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e/file Nov 13 05:45:03.984: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-7676 PodName:hostexec-node1-tmr6g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:03.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e Nov 13 05:45:04.068: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-71ede72e-e2cf-4475-b41d-dcd24b0de42e] Namespace:persistent-local-volumes-test-7676 PodName:hostexec-node1-tmr6g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:04.068: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:04.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7676" for this suite. 
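For the "blockfswithformat" variant the only extra work over the raw loop device is the mkfs/mount pair seen above, mirrored by the umount / losetup -d / rm teardown. Condensed sketch; the device and path are placeholders for whatever losetup returned, and the directory is assumed to exist from the earlier setup step:

  #!/bin/sh
  LOOP=/dev/loop0
  DIR=/tmp/local-volume-test-demo
  mkfs -t ext4 "$LOOP"
  mount -t ext4 "$LOOP" "$DIR"
  chmod o+rwx "$DIR"            # let the unprivileged test pods write
  # ... pods write and read under $DIR through a local PersistentVolume ...
  umount "$DIR"
  losetup -d "$LOOP"
  rm -r "$DIR"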
• [SLOW TEST:13.381 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":225,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:18.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-3838d7f0-9dd1-41b1-9fd5-8f9c61ddbd7c" Nov 13 05:45:04.409: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3838d7f0-9dd1-41b1-9fd5-8f9c61ddbd7c && dd if=/dev/zero of=/tmp/local-volume-test-3838d7f0-9dd1-41b1-9fd5-8f9c61ddbd7c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-3838d7f0-9dd1-41b1-9fd5-8f9c61ddbd7c/file] Namespace:persistent-local-volumes-test-9547 PodName:hostexec-node2-nljrr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:04.409: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:04.960: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3838d7f0-9dd1-41b1-9fd5-8f9c61ddbd7c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9547 PodName:hostexec-node2-nljrr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:04.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:45:05.046: INFO: Creating a PV followed by a PVC Nov 13 05:45:05.055: INFO: Waiting for PV local-pv6b5v5 to bind to PVC pvc-kkggj Nov 13 05:45:05.055: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-kkggj] to have phase Bound Nov 13 05:45:05.057: INFO: PersistentVolumeClaim pvc-kkggj found but phase is Pending instead of Bound. 
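The "Waiting ... to have phase Bound" messages in these blocks are simply polling the claim's status until the binder pairs it with the pre-created PV. The same wait done by hand (claim name is a placeholder):

  #!/bin/sh
  PVC=demo-claim
  until [ "$(kubectl get pvc "$PVC" -o jsonpath='{.status.phase}')" = "Bound" ]; do
    sleep 2
  done
  kubectl get pv -o wide    # the matching local PV should also report Bound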
Nov 13 05:45:07.063: INFO: PersistentVolumeClaim pvc-kkggj found and phase=Bound (2.007933243s) Nov 13 05:45:07.063: INFO: Waiting up to 3m0s for PersistentVolume local-pv6b5v5 to have phase Bound Nov 13 05:45:07.066: INFO: PersistentVolume local-pv6b5v5 found and phase=Bound (3.072443ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:45:07.071: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:45:07.073: INFO: Deleting PersistentVolumeClaim "pvc-kkggj" Nov 13 05:45:07.077: INFO: Deleting PersistentVolume "local-pv6b5v5" Nov 13 05:45:07.081: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3838d7f0-9dd1-41b1-9fd5-8f9c61ddbd7c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9547 PodName:hostexec-node2-nljrr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:07.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-3838d7f0-9dd1-41b1-9fd5-8f9c61ddbd7c/file Nov 13 05:45:07.176: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-9547 PodName:hostexec-node2-nljrr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:07.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-3838d7f0-9dd1-41b1-9fd5-8f9c61ddbd7c Nov 13 05:45:07.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3838d7f0-9dd1-41b1-9fd5-8f9c61ddbd7c] Namespace:persistent-local-volumes-test-9547 PodName:hostexec-node2-nljrr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:07.265: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:07.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9547" for this suite. 
S [SKIPPING] [49.040 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:07.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:45:07.512: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:07.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-1933" for this suite. 
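The Pod Disks spec bails out in BeforeEach because it needs at least two usable nodes; the "-1" suggests the framework never resolved a node count for this provider. A quick manual check of schedulable nodes on the cluster:

  #!/bin/sh
  # count nodes that are not cordoned
  kubectl get nodes --no-headers | grep -cv SchedulingDisabled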
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:33.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Nov 13 05:45:03.371: INFO: Deleting pod "pv-5989"/"pod-ephm-test-projected-shpq" Nov 13 05:45:03.371: INFO: Deleting pod "pod-ephm-test-projected-shpq" in namespace "pv-5989" Nov 13 05:45:03.376: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-shpq" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:11.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5989" for this suite. 
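The Ephemeralstorage spec reduces to: a pod whose volume references an object that does not exist never becomes Ready, yet deleting it must still succeed. A rough equivalent with a non-optional configMap volume; names and image are placeholders:

  #!/bin/sh
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: missing-configmap-demo
  spec:
    containers:
    - name: c
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      configMap:
        name: does-not-exist   # intentionally missing
        optional: false        # non-optional: the mount is required
  EOF
  # the pod stays Pending with FailedMount events, but deletion must still work
  kubectl delete pod missing-configmap-demo --wait=true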
• [SLOW TEST:38.057 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":14,"skipped":379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:40:18.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:18.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1826" for this suite. • [SLOW TEST:300.056 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":2,"skipped":136,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:27.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338" Nov 13 05:44:57.496: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338 && dd if=/dev/zero 
of=/tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338/file] Namespace:persistent-local-volumes-test-8561 PodName:hostexec-node2-k7ccq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:57.496: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:58.341: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8561 PodName:hostexec-node2-k7ccq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:58.341: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:44:58.534: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338 && chmod o+rwx /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338] Namespace:persistent-local-volumes-test-8561 PodName:hostexec-node2-k7ccq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:44:58.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:44:58.970: INFO: Creating a PV followed by a PVC Nov 13 05:44:58.981: INFO: Waiting for PV local-pvfpg7v to bind to PVC pvc-44zt8 Nov 13 05:44:58.981: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-44zt8] to have phase Bound Nov 13 05:44:58.983: INFO: PersistentVolumeClaim pvc-44zt8 found but phase is Pending instead of Bound. 
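"Creating a PV followed by a PVC" in these specs produces roughly the pair below: a local PV pinned to one node via nodeAffinity plus a claim that binds to it. Names, size, StorageClass and node are placeholders; the real objects carry generated suffixes such as local-pvfpg7v and pvc-44zt8:

  #!/bin/sh
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: demo-local-pv
  spec:
    capacity:
      storage: 2Gi
    accessModes: ["ReadWriteOnce"]
    persistentVolumeReclaimPolicy: Retain
    storageClassName: local-storage
    local:
      path: /tmp/local-volume-test-demo
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values: ["node2"]
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: demo-local-pvc
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: local-storage
    resources:
      requests:
        storage: 2Gi
  EOF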
Nov 13 05:45:00.985: INFO: PersistentVolumeClaim pvc-44zt8 found and phase=Bound (2.004283528s) Nov 13 05:45:00.985: INFO: Waiting up to 3m0s for PersistentVolume local-pvfpg7v to have phase Bound Nov 13 05:45:00.987: INFO: PersistentVolume local-pvfpg7v found and phase=Bound (1.92693ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:45:13.012: INFO: pod "pod-b1dc8579-6f3b-4161-9594-5aebfbfac1bc" created on Node "node2" STEP: Writing in pod1 Nov 13 05:45:13.012: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8561 PodName:pod-b1dc8579-6f3b-4161-9594-5aebfbfac1bc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:13.012: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:13.099: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:45:13.099: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8561 PodName:pod-b1dc8579-6f3b-4161-9594-5aebfbfac1bc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:13.099: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:13.251: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:45:19.274: INFO: pod "pod-924a87e2-1373-4dd4-8024-c8a917c4d605" created on Node "node2" Nov 13 05:45:19.275: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8561 PodName:pod-924a87e2-1373-4dd4-8024-c8a917c4d605 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:19.275: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:19.351: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 13 05:45:19.351: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8561 PodName:pod-924a87e2-1373-4dd4-8024-c8a917c4d605 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:19.352: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:19.450: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 13 05:45:19.450: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8561 PodName:pod-b1dc8579-6f3b-4161-9594-5aebfbfac1bc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:19.451: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:19.640: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-b1dc8579-6f3b-4161-9594-5aebfbfac1bc in namespace persistent-local-volumes-test-8561 STEP: Deleting pod2 STEP: Deleting pod pod-924a87e2-1373-4dd4-8024-c8a917c4d605 in namespace persistent-local-volumes-test-8561 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:45:19.651: INFO: Deleting PersistentVolumeClaim "pvc-44zt8" Nov 13 05:45:19.655: INFO: Deleting PersistentVolume "local-pvfpg7v" Nov 13 05:45:19.659: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338] Namespace:persistent-local-volumes-test-8561 PodName:hostexec-node2-k7ccq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:19.659: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:19.756: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8561 PodName:hostexec-node2-k7ccq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:19.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338/file Nov 13 05:45:19.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8561 PodName:hostexec-node2-k7ccq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:19.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338 Nov 13 05:45:19.943: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e9bcb36b-9814-4870-b9cb-8f8044480338] Namespace:persistent-local-volumes-test-8561 PodName:hostexec-node2-k7ccq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:19.943: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:20.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8561" for this suite. 
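The write/read round-trips above are plain shell run through the exec API. The same check by hand; pod names are placeholders, and both pods are assumed to mount the shared local PV at /mnt/volume1:

  #!/bin/sh
  kubectl exec demo-pod1 -- /bin/sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
  kubectl exec demo-pod2 -- /bin/sh -c 'cat /mnt/volume1/test-file'   # expect: test-file-content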
• [SLOW TEST:52.597 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":76,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:18.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e" Nov 13 05:45:21.017: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e && dd if=/dev/zero of=/tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e/file] Namespace:persistent-local-volumes-test-6944 PodName:hostexec-node1-jcks9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:21.017: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:21.181: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6944 PodName:hostexec-node1-jcks9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:21.181: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:21.285: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e && chmod o+rwx /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e] Namespace:persistent-local-volumes-test-6944 PodName:hostexec-node1-jcks9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 
05:45:21.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:45:22.114: INFO: Creating a PV followed by a PVC Nov 13 05:45:22.124: INFO: Waiting for PV local-pvp8hd9 to bind to PVC pvc-jbjg2 Nov 13 05:45:22.125: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jbjg2] to have phase Bound Nov 13 05:45:22.127: INFO: PersistentVolumeClaim pvc-jbjg2 found but phase is Pending instead of Bound. Nov 13 05:45:24.131: INFO: PersistentVolumeClaim pvc-jbjg2 found and phase=Bound (2.006744766s) Nov 13 05:45:24.131: INFO: Waiting up to 3m0s for PersistentVolume local-pvp8hd9 to have phase Bound Nov 13 05:45:24.133: INFO: PersistentVolume local-pvp8hd9 found and phase=Bound (1.865384ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:45:24.137: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:45:24.139: INFO: Deleting PersistentVolumeClaim "pvc-jbjg2" Nov 13 05:45:24.143: INFO: Deleting PersistentVolume "local-pvp8hd9" Nov 13 05:45:24.146: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e] Namespace:persistent-local-volumes-test-6944 PodName:hostexec-node1-jcks9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:24.146: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:24.243: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6944 PodName:hostexec-node1-jcks9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:24.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e/file Nov 13 05:45:24.354: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6944 PodName:hostexec-node1-jcks9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:24.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e Nov 13 05:45:24.448: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f2b43fab-0998-4b9d-87ec-e548bac6ff7e] Namespace:persistent-local-volumes-test-6944 PodName:hostexec-node1-jcks9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:24.448: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:24.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6944" for this suite. S [SKIPPING] [5.595 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:44:39.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:45:05.968: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-f79c11a4-f3ef-4a3e-8250-2d200c2c1677 && mount --bind /tmp/local-volume-test-f79c11a4-f3ef-4a3e-8250-2d200c2c1677 /tmp/local-volume-test-f79c11a4-f3ef-4a3e-8250-2d200c2c1677] Namespace:persistent-local-volumes-test-4449 PodName:hostexec-node2-gs8fs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:05.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:45:06.347: INFO: Creating a PV followed by a PVC Nov 13 05:45:06.355: INFO: Waiting for PV local-pvhggd6 to bind to PVC pvc-whszz Nov 13 05:45:06.355: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-whszz] to have phase Bound Nov 13 05:45:06.357: INFO: PersistentVolumeClaim pvc-whszz found but phase is Pending instead of Bound. 
Nov 13 05:45:08.363: INFO: PersistentVolumeClaim pvc-whszz found and phase=Bound (2.007566269s) Nov 13 05:45:08.363: INFO: Waiting up to 3m0s for PersistentVolume local-pvhggd6 to have phase Bound Nov 13 05:45:08.365: INFO: PersistentVolume local-pvhggd6 found and phase=Bound (2.533737ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:45:18.390: INFO: pod "pod-7154ae3d-1d3b-4c69-89a6-78dbd96702e8" created on Node "node2" STEP: Writing in pod1 Nov 13 05:45:18.390: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4449 PodName:pod-7154ae3d-1d3b-4c69-89a6-78dbd96702e8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:18.390: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:18.501: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:45:18.501: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4449 PodName:pod-7154ae3d-1d3b-4c69-89a6-78dbd96702e8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:18.501: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:19.057: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:45:27.083: INFO: pod "pod-a96245f3-5932-4aff-92d2-3b61e25c8c5b" created on Node "node2" Nov 13 05:45:27.083: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4449 PodName:pod-a96245f3-5932-4aff-92d2-3b61e25c8c5b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:27.083: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:28.424: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 13 05:45:28.424: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f79c11a4-f3ef-4a3e-8250-2d200c2c1677 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4449 PodName:pod-a96245f3-5932-4aff-92d2-3b61e25c8c5b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:28.424: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:28.536: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f79c11a4-f3ef-4a3e-8250-2d200c2c1677 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 13 05:45:28.536: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4449 PodName:pod-7154ae3d-1d3b-4c69-89a6-78dbd96702e8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:28.536: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:28.769: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-f79c11a4-f3ef-4a3e-8250-2d200c2c1677", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-7154ae3d-1d3b-4c69-89a6-78dbd96702e8 in namespace persistent-local-volumes-test-4449 STEP: Deleting pod2 STEP: Deleting pod pod-a96245f3-5932-4aff-92d2-3b61e25c8c5b in namespace persistent-local-volumes-test-4449 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:45:28.778: INFO: Deleting PersistentVolumeClaim "pvc-whszz" Nov 13 05:45:28.782: INFO: Deleting PersistentVolume "local-pvhggd6" STEP: Removing the test directory Nov 13 05:45:28.786: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-f79c11a4-f3ef-4a3e-8250-2d200c2c1677 && rm -r /tmp/local-volume-test-f79c11a4-f3ef-4a3e-8250-2d200c2c1677] Namespace:persistent-local-volumes-test-4449 PodName:hostexec-node2-gs8fs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:28.786: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:28.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4449" for this suite. • [SLOW TEST:49.009 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":230,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:29.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 13 05:45:29.043: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:29.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-6521" for this suite. 
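Note: every [Volume type: ...] case above provisions its volume on the target node through the hostexec pod, wrapping each command in nsenter --mount=/rootfs/proc/1/ns/mnt so it runs in the host's mount namespace. A minimal sketch of the blockfswithformat setup logged above, assuming a root shell on the node; the directory is a placeholder, not the test's real per-run path, and the loop device is whatever losetup hands out:

  # Create a 20 MiB backing file, attach it to a free loop device,
  # format it as ext4, and mount it back over the test directory.
  DIR=/tmp/local-volume-test-example          # placeholder path
  mkdir -p "$DIR"
  dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
  losetup -f "$DIR/file"
  LOOP=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
  mkfs -t ext4 "$LOOP"
  mount -t ext4 "$LOOP" "$DIR" && chmod o+rwx "$DIR"

The dir-bindmounted variant exercised just above is the same pattern minus the loop device: mkdir "$DIR" && mount --bind "$DIR" "$DIR".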
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:81 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:24.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-3dcaceeb-bf92-481e-ab43-45e88764a064" Nov 13 05:45:30.636: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3dcaceeb-bf92-481e-ab43-45e88764a064" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3dcaceeb-bf92-481e-ab43-45e88764a064" "/tmp/local-volume-test-3dcaceeb-bf92-481e-ab43-45e88764a064"] Namespace:persistent-local-volumes-test-2966 PodName:hostexec-node1-2c5z9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:30.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:45:30.816: INFO: Creating a PV followed by a PVC Nov 13 05:45:30.823: INFO: Waiting for PV local-pvsk8zq to bind to PVC pvc-7khzk Nov 13 05:45:30.823: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7khzk] to have phase Bound Nov 13 05:45:30.825: INFO: PersistentVolumeClaim pvc-7khzk found but phase is Pending instead of Bound. 
Nov 13 05:45:32.829: INFO: PersistentVolumeClaim pvc-7khzk found and phase=Bound (2.00600186s) Nov 13 05:45:32.829: INFO: Waiting up to 3m0s for PersistentVolume local-pvsk8zq to have phase Bound Nov 13 05:45:32.832: INFO: PersistentVolume local-pvsk8zq found and phase=Bound (2.9472ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:45:32.836: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:45:32.838: INFO: Deleting PersistentVolumeClaim "pvc-7khzk" Nov 13 05:45:32.842: INFO: Deleting PersistentVolume "local-pvsk8zq" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-3dcaceeb-bf92-481e-ab43-45e88764a064" Nov 13 05:45:32.846: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3dcaceeb-bf92-481e-ab43-45e88764a064"] Namespace:persistent-local-volumes-test-2966 PodName:hostexec-node1-2c5z9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:32.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:45:32.972: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3dcaceeb-bf92-481e-ab43-45e88764a064] Namespace:persistent-local-volumes-test-2966 PodName:hostexec-node1-2c5z9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:32.972: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:33.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2966" for this suite. 
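Note: the tmpfs variant above replaces the loop device with a small RAM-backed mount; a sketch of the logged setup and teardown, again with a placeholder path:

  DIR=/tmp/local-volume-test-example               # placeholder path
  mkdir -p "$DIR"
  mount -t tmpfs -o size=10m "tmpfs-$DIR" "$DIR"   # 10 MiB tmpfs named after the path
  # ... test runs ...
  umount "$DIR"
  rm -r "$DIR"

As with the blockfswithformat case earlier, the "should set different fsGroup for second pod if first pod is deleted" spec is skipped for this volume type too, pending #73168.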
S [SKIPPING] [8.497 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:20.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 13 05:45:20.159: INFO: The status of Pod test-hostpath-type-cnqxc is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:22.162: INFO: The status of Pod test-hostpath-type-cnqxc is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:24.163: INFO: The status of Pod test-hostpath-type-cnqxc is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:34.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-1560" for this suite. 
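Note: the HostPathType specs above do not log their pod manifests, so the following is an illustrative sketch only, not the manifest the suite generates. It shows how the two hostPath types in play map onto the API: FileOrCreate creates the file on the node if it is missing, while File requires it to exist already. The pod name, image, and paths here are assumptions for the example; node1 is the node the logged run pinned to.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostpath-file-demo            # hypothetical name
  spec:
    nodeName: node1
    containers:
    - name: test
      image: busybox                    # any image with /bin/sh works for the demo
      command: ["sleep", "3600"]
      volumeMounts:
      - name: afile
        mountPath: /mnt/test/afile
    volumes:
    - name: afile
      hostPath:
        path: /tmp/afile                # assumed location of 'afile' for the demo
        type: FileOrCreate              # HostPathFileOrCreate in the STEP text above
  EOF

A deliberate mismatch, for example mounting this file with type Directory as the later "Should fail on mounting file 'afile'" case does, is exactly what the negative HostPathType specs provoke.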
• [SLOW TEST:14.097 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile","total":-1,"completed":6,"skipped":111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:34.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:45:34.330: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:34.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-352" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:02.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:45:14.562: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-fe6f179e-b85e-4707-847c-98e1c8e40d8c && mount 
--bind /tmp/local-volume-test-fe6f179e-b85e-4707-847c-98e1c8e40d8c /tmp/local-volume-test-fe6f179e-b85e-4707-847c-98e1c8e40d8c] Namespace:persistent-local-volumes-test-8966 PodName:hostexec-node2-vzrg9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:14.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:45:14.652: INFO: Creating a PV followed by a PVC Nov 13 05:45:14.658: INFO: Waiting for PV local-pv8hrqp to bind to PVC pvc-v8n7h Nov 13 05:45:14.658: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-v8n7h] to have phase Bound Nov 13 05:45:14.661: INFO: PersistentVolumeClaim pvc-v8n7h found but phase is Pending instead of Bound. Nov 13 05:45:16.665: INFO: PersistentVolumeClaim pvc-v8n7h found and phase=Bound (2.00648497s) Nov 13 05:45:16.665: INFO: Waiting up to 3m0s for PersistentVolume local-pv8hrqp to have phase Bound Nov 13 05:45:16.667: INFO: PersistentVolume local-pv8hrqp found and phase=Bound (1.76153ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 13 05:45:22.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8966 exec pod-c7549309-c142-4db0-93ec-01fd1bfd8e84 --namespace=persistent-local-volumes-test-8966 -- stat -c %g /mnt/volume1' Nov 13 05:45:22.934: INFO: stderr: "" Nov 13 05:45:22.934: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 13 05:45:36.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8966 exec pod-59069b5d-9924-4564-abdc-d65a791bb2f1 --namespace=persistent-local-volumes-test-8966 -- stat -c %g /mnt/volume1' Nov 13 05:45:37.196: INFO: stderr: "" Nov 13 05:45:37.196: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-c7549309-c142-4db0-93ec-01fd1bfd8e84 in namespace persistent-local-volumes-test-8966 STEP: Deleting second pod STEP: Deleting pod pod-59069b5d-9924-4564-abdc-d65a791bb2f1 in namespace persistent-local-volumes-test-8966 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:45:37.208: INFO: Deleting PersistentVolumeClaim "pvc-v8n7h" Nov 13 05:45:37.211: INFO: Deleting PersistentVolume "local-pv8hrqp" STEP: Removing the test directory Nov 13 05:45:37.215: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-fe6f179e-b85e-4707-847c-98e1c8e40d8c && rm -r /tmp/local-volume-test-fe6f179e-b85e-4707-847c-98e1c8e40d8c] Namespace:persistent-local-volumes-test-8966 PodName:hostexec-node2-vzrg9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:37.215: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 
05:45:37.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8966" for this suite. • [SLOW TEST:34.815 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":8,"skipped":350,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:07.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:45:13.630: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-6d457c47-c522-4dad-a063-412d7bc704a3 && mount --bind /tmp/local-volume-test-6d457c47-c522-4dad-a063-412d7bc704a3 /tmp/local-volume-test-6d457c47-c522-4dad-a063-412d7bc704a3] Namespace:persistent-local-volumes-test-8808 PodName:hostexec-node2-sc5lq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:13.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:45:13.737: INFO: Creating a PV followed by a PVC Nov 13 05:45:13.744: INFO: Waiting for PV local-pvktv5w to bind to PVC pvc-d4tbq Nov 13 05:45:13.744: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-d4tbq] to have phase Bound Nov 13 05:45:13.746: INFO: PersistentVolumeClaim pvc-d4tbq found but phase is Pending instead of Bound. Nov 13 05:45:15.750: INFO: PersistentVolumeClaim pvc-d4tbq found but phase is Pending instead of Bound. Nov 13 05:45:17.753: INFO: PersistentVolumeClaim pvc-d4tbq found but phase is Pending instead of Bound. Nov 13 05:45:19.758: INFO: PersistentVolumeClaim pvc-d4tbq found but phase is Pending instead of Bound. Nov 13 05:45:21.762: INFO: PersistentVolumeClaim pvc-d4tbq found but phase is Pending instead of Bound. Nov 13 05:45:23.767: INFO: PersistentVolumeClaim pvc-d4tbq found but phase is Pending instead of Bound. Nov 13 05:45:25.773: INFO: PersistentVolumeClaim pvc-d4tbq found but phase is Pending instead of Bound. 
Nov 13 05:45:27.776: INFO: PersistentVolumeClaim pvc-d4tbq found and phase=Bound (14.032747744s) Nov 13 05:45:27.776: INFO: Waiting up to 3m0s for PersistentVolume local-pvktv5w to have phase Bound Nov 13 05:45:27.778: INFO: PersistentVolume local-pvktv5w found and phase=Bound (2.023164ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:45:39.809: INFO: pod "pod-caf28c21-e71a-4285-90e8-d1005ad5e1f9" created on Node "node2" STEP: Writing in pod1 Nov 13 05:45:39.809: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8808 PodName:pod-caf28c21-e71a-4285-90e8-d1005ad5e1f9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:39.809: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:40.025: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:45:40.025: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8808 PodName:pod-caf28c21-e71a-4285-90e8-d1005ad5e1f9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:40.025: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:40.112: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-caf28c21-e71a-4285-90e8-d1005ad5e1f9 in namespace persistent-local-volumes-test-8808 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:45:40.118: INFO: Deleting PersistentVolumeClaim "pvc-d4tbq" Nov 13 05:45:40.122: INFO: Deleting PersistentVolume "local-pvktv5w" STEP: Removing the test directory Nov 13 05:45:40.126: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-6d457c47-c522-4dad-a063-412d7bc704a3 && rm -r /tmp/local-volume-test-6d457c47-c522-4dad-a063-412d7bc704a3] Namespace:persistent-local-volumes-test-8808 PodName:hostexec-node2-sc5lq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:40.126: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:40.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8808" for this suite. 
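Note: the fsGroup assertions above boil down to the kubectl exec shown in the log: both pods mount the same local volume, and the group owner of the mount point must match the fsGroup they request (presumably via pod.spec.securityContext.fsGroup). Reproduced as a sketch, with the namespace and pod name taken from the run above:

  # Group ownership of the volume as seen from inside the pod; the test
  # expects this to print the requested fsGroup (1234 in the run above).
  kubectl --kubeconfig=/root/.kube/config \
    -n persistent-local-volumes-test-8966 \
    exec pod-c7549309-c142-4db0-93ec-01fd1bfd8e84 -- stat -c %g /mnt/volume1

Running the same command against the second pod is what the "same fsGroup for two pods simultaneously" spec checks: both prints must agree.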
• [SLOW TEST:33.058 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":11,"skipped":485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:43:49.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-887 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:43:49.218: INFO: creating *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-attacher Nov 13 05:43:49.221: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-887 Nov 13 05:43:49.221: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-887 Nov 13 05:43:49.226: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-887 Nov 13 05:43:49.228: INFO: creating *v1.Role: csi-mock-volumes-887-2009/external-attacher-cfg-csi-mock-volumes-887 Nov 13 05:43:49.231: INFO: creating *v1.RoleBinding: csi-mock-volumes-887-2009/csi-attacher-role-cfg Nov 13 05:43:49.234: INFO: creating *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-provisioner Nov 13 05:43:49.237: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-887 Nov 13 05:43:49.237: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-887 Nov 13 05:43:49.240: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-887 Nov 13 05:43:49.243: INFO: creating *v1.Role: csi-mock-volumes-887-2009/external-provisioner-cfg-csi-mock-volumes-887 Nov 13 05:43:49.246: INFO: creating *v1.RoleBinding: csi-mock-volumes-887-2009/csi-provisioner-role-cfg Nov 13 05:43:49.248: INFO: creating *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-resizer Nov 13 05:43:49.250: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-887 Nov 13 05:43:49.250: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-887 Nov 13 05:43:49.253: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-887 Nov 13 05:43:49.256: INFO: creating *v1.Role: csi-mock-volumes-887-2009/external-resizer-cfg-csi-mock-volumes-887 Nov 13 05:43:49.258: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-887-2009/csi-resizer-role-cfg Nov 13 05:43:49.261: INFO: creating *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-snapshotter Nov 13 05:43:49.264: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-887 Nov 13 05:43:49.264: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-887 Nov 13 05:43:49.266: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-887 Nov 13 05:43:49.269: INFO: creating *v1.Role: csi-mock-volumes-887-2009/external-snapshotter-leaderelection-csi-mock-volumes-887 Nov 13 05:43:49.271: INFO: creating *v1.RoleBinding: csi-mock-volumes-887-2009/external-snapshotter-leaderelection Nov 13 05:43:49.273: INFO: creating *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-mock Nov 13 05:43:49.276: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-887 Nov 13 05:43:49.278: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-887 Nov 13 05:43:49.281: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-887 Nov 13 05:43:49.283: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-887 Nov 13 05:43:49.286: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-887 Nov 13 05:43:49.289: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-887 Nov 13 05:43:49.292: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-887 Nov 13 05:43:49.294: INFO: creating *v1.StatefulSet: csi-mock-volumes-887-2009/csi-mockplugin Nov 13 05:43:49.298: INFO: creating *v1.StatefulSet: csi-mock-volumes-887-2009/csi-mockplugin-attacher Nov 13 05:43:49.302: INFO: creating *v1.StatefulSet: csi-mock-volumes-887-2009/csi-mockplugin-resizer Nov 13 05:43:49.305: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-887 to register on node node2 STEP: Creating pod Nov 13 05:44:30.901: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:44:30.907: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-xdtvq] to have phase Bound Nov 13 05:44:30.909: INFO: PersistentVolumeClaim pvc-xdtvq found but phase is Pending instead of Bound. 
Nov 13 05:44:32.913: INFO: PersistentVolumeClaim pvc-xdtvq found and phase=Bound (2.005558727s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-h5qg9 Nov 13 05:44:56.952: INFO: Deleting pod "pvc-volume-tester-h5qg9" in namespace "csi-mock-volumes-887" Nov 13 05:44:56.956: INFO: Wait up to 5m0s for pod "pvc-volume-tester-h5qg9" to be fully deleted STEP: Deleting claim pvc-xdtvq Nov 13 05:45:16.971: INFO: Waiting up to 2m0s for PersistentVolume pvc-d9351923-6bf2-4ff0-9699-0fba348e2ef5 to get deleted Nov 13 05:45:16.973: INFO: PersistentVolume pvc-d9351923-6bf2-4ff0-9699-0fba348e2ef5 found and phase=Bound (2.096135ms) Nov 13 05:45:18.976: INFO: PersistentVolume pvc-d9351923-6bf2-4ff0-9699-0fba348e2ef5 was removed STEP: Deleting storageclass csi-mock-volumes-887-scl9djk STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-887 STEP: Waiting for namespaces [csi-mock-volumes-887] to vanish STEP: uninstalling csi mock driver Nov 13 05:45:24.990: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-attacher Nov 13 05:45:24.994: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-887 Nov 13 05:45:24.998: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-887 Nov 13 05:45:25.001: INFO: deleting *v1.Role: csi-mock-volumes-887-2009/external-attacher-cfg-csi-mock-volumes-887 Nov 13 05:45:25.006: INFO: deleting *v1.RoleBinding: csi-mock-volumes-887-2009/csi-attacher-role-cfg Nov 13 05:45:25.010: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-provisioner Nov 13 05:45:25.014: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-887 Nov 13 05:45:25.021: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-887 Nov 13 05:45:25.024: INFO: deleting *v1.Role: csi-mock-volumes-887-2009/external-provisioner-cfg-csi-mock-volumes-887 Nov 13 05:45:25.031: INFO: deleting *v1.RoleBinding: csi-mock-volumes-887-2009/csi-provisioner-role-cfg Nov 13 05:45:25.038: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-resizer Nov 13 05:45:25.041: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-887 Nov 13 05:45:25.045: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-887 Nov 13 05:45:25.049: INFO: deleting *v1.Role: csi-mock-volumes-887-2009/external-resizer-cfg-csi-mock-volumes-887 Nov 13 05:45:25.052: INFO: deleting *v1.RoleBinding: csi-mock-volumes-887-2009/csi-resizer-role-cfg Nov 13 05:45:25.055: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-snapshotter Nov 13 05:45:25.058: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-887 Nov 13 05:45:25.061: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-887 Nov 13 05:45:25.065: INFO: deleting *v1.Role: csi-mock-volumes-887-2009/external-snapshotter-leaderelection-csi-mock-volumes-887 Nov 13 05:45:25.068: INFO: deleting *v1.RoleBinding: csi-mock-volumes-887-2009/external-snapshotter-leaderelection Nov 13 05:45:25.072: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-887-2009/csi-mock Nov 13 05:45:25.075: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-887 Nov 13 05:45:25.079: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-887 Nov 13 05:45:25.082: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-887 Nov 13 05:45:25.086: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-887 Nov 13 05:45:25.089: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-887 Nov 13 05:45:25.093: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-887 Nov 13 05:45:25.096: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-887 Nov 13 05:45:25.099: INFO: deleting *v1.StatefulSet: csi-mock-volumes-887-2009/csi-mockplugin Nov 13 05:45:25.103: INFO: deleting *v1.StatefulSet: csi-mock-volumes-887-2009/csi-mockplugin-attacher Nov 13 05:45:25.106: INFO: deleting *v1.StatefulSet: csi-mock-volumes-887-2009/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-887-2009 STEP: Waiting for namespaces [csi-mock-volumes-887-2009] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:41.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:111.960 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":9,"skipped":391,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:29.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:45:29.112: INFO: The status of Pod test-hostpath-type-pd82z is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:31.117: INFO: The status of Pod test-hostpath-type-pd82z is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:33.115: INFO: The status of Pod test-hostpath-type-pd82z is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:41.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-9620" for this suite. • [SLOW TEST:12.103 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev","total":-1,"completed":8,"skipped":290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:33.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:45:33.127: INFO: The status of Pod test-hostpath-type-sdw28 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:35.131: INFO: The status of Pod test-hostpath-type-sdw28 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:37.130: INFO: The status of Pod test-hostpath-type-sdw28 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:39.131: INFO: The status of Pod test-hostpath-type-sdw28 is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Nov 13 05:45:39.134: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-4489 PodName:test-hostpath-type-sdw28 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:39.134: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:41.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-4489" for this suite. 
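Note: in the CSI Volume expansion test above, the "Expanding current pvc" step amounts to raising the claim's storage request and letting the external-resizer reconcile it. A rough kubectl equivalent, using the claim and namespace from the log and a placeholder target size:

  # Request a larger size on the bound PVC; the mock driver's resizer then
  # updates the PV and the PVC's reported capacity.
  kubectl -n csi-mock-volumes-887 patch pvc pvc-xdtvq --type=merge \
    -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'   # 2Gi is a placeholder

Because the test runs with nodeExpansion=off, only the controller-side expansion is needed, which is why the pod pvc-volume-tester-h5qg9 never has to be restarted for the resize to finish.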
• [SLOW TEST:8.186 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory","total":-1,"completed":3,"skipped":163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:11.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-b08b7a42-e645-4344-a0a8-73c401ad1da7" Nov 13 05:45:19.524: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b08b7a42-e645-4344-a0a8-73c401ad1da7 && dd if=/dev/zero of=/tmp/local-volume-test-b08b7a42-e645-4344-a0a8-73c401ad1da7/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-b08b7a42-e645-4344-a0a8-73c401ad1da7/file] Namespace:persistent-local-volumes-test-7367 PodName:hostexec-node2-5f9qx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:19.524: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:19.698: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b08b7a42-e645-4344-a0a8-73c401ad1da7/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7367 PodName:hostexec-node2-5f9qx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:19.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:45:19.800: INFO: Creating a PV followed by a PVC Nov 13 05:45:19.807: INFO: Waiting for PV local-pv67xkp to bind to PVC pvc-cwc6m Nov 13 05:45:19.807: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cwc6m] to have phase Bound Nov 13 05:45:19.809: INFO: PersistentVolumeClaim pvc-cwc6m found but phase is Pending instead of Bound. Nov 13 05:45:21.813: INFO: PersistentVolumeClaim pvc-cwc6m found but phase is Pending instead of Bound. Nov 13 05:45:23.818: INFO: PersistentVolumeClaim pvc-cwc6m found but phase is Pending instead of Bound. Nov 13 05:45:25.821: INFO: PersistentVolumeClaim pvc-cwc6m found but phase is Pending instead of Bound. 
Nov 13 05:45:27.824: INFO: PersistentVolumeClaim pvc-cwc6m found and phase=Bound (8.017004349s) Nov 13 05:45:27.824: INFO: Waiting up to 3m0s for PersistentVolume local-pv67xkp to have phase Bound Nov 13 05:45:27.826: INFO: PersistentVolume local-pv67xkp found and phase=Bound (2.119159ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:45:37.854: INFO: pod "pod-336b7ec2-8de9-454b-a609-2ce935018db0" created on Node "node2" STEP: Writing in pod1 Nov 13 05:45:37.854: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7367 PodName:pod-336b7ec2-8de9-454b-a609-2ce935018db0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:37.854: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:37.993: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:45:37.993: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7367 PodName:pod-336b7ec2-8de9-454b-a609-2ce935018db0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:37.993: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:38.172: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:45:46.194: INFO: pod "pod-3a134cd7-a426-4fda-92bd-fd73f22998fa" created on Node "node2" Nov 13 05:45:46.194: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7367 PodName:pod-3a134cd7-a426-4fda-92bd-fd73f22998fa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:46.194: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:46.302: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 13 05:45:46.302: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop1 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7367 PodName:pod-3a134cd7-a426-4fda-92bd-fd73f22998fa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:46.302: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:46.440: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop1 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 13 05:45:46.440: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7367 PodName:pod-336b7ec2-8de9-454b-a609-2ce935018db0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:46.440: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:46.538: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/dev/loop1", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-336b7ec2-8de9-454b-a609-2ce935018db0 in namespace persistent-local-volumes-test-7367 STEP: Deleting pod2 STEP: Deleting pod pod-3a134cd7-a426-4fda-92bd-fd73f22998fa in 
namespace persistent-local-volumes-test-7367 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:45:46.547: INFO: Deleting PersistentVolumeClaim "pvc-cwc6m" Nov 13 05:45:46.551: INFO: Deleting PersistentVolume "local-pv67xkp" Nov 13 05:45:46.554: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b08b7a42-e645-4344-a0a8-73c401ad1da7/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7367 PodName:hostexec-node2-5f9qx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:46.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-b08b7a42-e645-4344-a0a8-73c401ad1da7/file Nov 13 05:45:46.694: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-7367 PodName:hostexec-node2-5f9qx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:46.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-b08b7a42-e645-4344-a0a8-73c401ad1da7 Nov 13 05:45:46.863: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b08b7a42-e645-4344-a0a8-73c401ad1da7] Namespace:persistent-local-volumes-test-7367 PodName:hostexec-node2-5f9qx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:45:46.863: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:47.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7367" for this suite. 
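Note: the two-pod read/write checks above are driven through ExecWithOptions, which is essentially kubectl exec; a sketch of the same verification with placeholder pod names (pod1 writes, pod2 reads the shared local volume):

  NS=persistent-local-volumes-test-7367        # namespace from the run above
  kubectl -n "$NS" exec pod1 -- /bin/sh -c \
    'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
  kubectl -n "$NS" exec pod2 -- /bin/sh -c 'cat /mnt/volume1/test-file'
  # expected output: test-file-content

Teardown for the loop-device variants is then the inverse of the setup sketched earlier: umount if the device was formatted, losetup -d on the loop device, and rm -r on the backing directory.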
• [SLOW TEST:35.552 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:40.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 13 05:45:40.777: INFO: The status of Pod test-hostpath-type-225bx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:42.780: INFO: The status of Pod test-hostpath-type-225bx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:44.780: INFO: The status of Pod test-hostpath-type-225bx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:45:46.779: INFO: The status of Pod test-hostpath-type-225bx is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:45:54.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-1184" for this suite. 
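Note: the "Checking for HostPathType error event" steps in the HostPathType specs above (and in the Directory and Block Device cases earlier) rely on the kubelet reporting the type mismatch as a mount failure on the pod. A rough way to see the same thing by hand, assuming FailedMount is the event reason and using the namespace from the last run:

  kubectl -n host-path-type-file-1184 get events \
    --field-selector reason=FailedMount
  # A mismatch such as mounting the file 'afile' with type Directory should
  # show up here instead of the pod ever reaching Running.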
• [SLOW TEST:14.095 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory","total":-1,"completed":12,"skipped":535,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:41:41.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 STEP: Building a driver namespace object, basename csi-mock-volumes-7214 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:41:41.110: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-attacher Nov 13 05:41:41.113: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7214 Nov 13 05:41:41.113: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7214 Nov 13 05:41:41.119: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7214 Nov 13 05:41:41.125: INFO: creating *v1.Role: csi-mock-volumes-7214-4672/external-attacher-cfg-csi-mock-volumes-7214 Nov 13 05:41:41.127: INFO: creating *v1.RoleBinding: csi-mock-volumes-7214-4672/csi-attacher-role-cfg Nov 13 05:41:41.131: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-provisioner Nov 13 05:41:41.134: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7214 Nov 13 05:41:41.134: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7214 Nov 13 05:41:41.136: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7214 Nov 13 05:41:41.139: INFO: creating *v1.Role: csi-mock-volumes-7214-4672/external-provisioner-cfg-csi-mock-volumes-7214 Nov 13 05:41:41.141: INFO: creating *v1.RoleBinding: csi-mock-volumes-7214-4672/csi-provisioner-role-cfg Nov 13 05:41:41.144: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-resizer Nov 13 05:41:41.147: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7214 Nov 13 05:41:41.147: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7214 Nov 13 05:41:41.149: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7214 Nov 13 05:41:41.153: INFO: creating *v1.Role: csi-mock-volumes-7214-4672/external-resizer-cfg-csi-mock-volumes-7214 Nov 13 05:41:41.156: INFO: creating *v1.RoleBinding: csi-mock-volumes-7214-4672/csi-resizer-role-cfg Nov 13 05:41:41.158: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-snapshotter Nov 13 05:41:41.161: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7214 Nov 13 05:41:41.161: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7214 Nov 13 05:41:41.164: 
INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7214 Nov 13 05:41:41.167: INFO: creating *v1.Role: csi-mock-volumes-7214-4672/external-snapshotter-leaderelection-csi-mock-volumes-7214 Nov 13 05:41:41.169: INFO: creating *v1.RoleBinding: csi-mock-volumes-7214-4672/external-snapshotter-leaderelection Nov 13 05:41:41.172: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-mock Nov 13 05:41:41.174: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7214 Nov 13 05:41:41.177: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7214 Nov 13 05:41:41.179: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7214 Nov 13 05:41:41.182: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7214 Nov 13 05:41:41.184: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7214 Nov 13 05:41:41.187: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7214 Nov 13 05:41:41.190: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7214 Nov 13 05:41:41.194: INFO: creating *v1.StatefulSet: csi-mock-volumes-7214-4672/csi-mockplugin Nov 13 05:41:41.198: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7214 to register on node node2 STEP: Creating pod Nov 13 05:41:50.714: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:41:50.718: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-gkn6w] to have phase Bound Nov 13 05:41:50.720: INFO: PersistentVolumeClaim pvc-gkn6w found but phase is Pending instead of Bound. Nov 13 05:41:52.723: INFO: PersistentVolumeClaim pvc-gkn6w found and phase=Bound (2.005229544s) STEP: Checking if attaching failed and pod cannot start STEP: Checking if VolumeAttachment was created for the pod STEP: Deploy CSIDriver object with attachRequired=false Nov 13 05:43:54.752: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7214 STEP: Wait for the pod in running status STEP: Wait for the volumeattachment to be deleted up to 7m0s STEP: Deleting pod pvc-volume-tester-5fjvj Nov 13 05:45:54.776: INFO: Deleting pod "pvc-volume-tester-5fjvj" in namespace "csi-mock-volumes-7214" Nov 13 05:45:54.781: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5fjvj" to be fully deleted STEP: Deleting claim pvc-gkn6w Nov 13 05:45:58.792: INFO: Waiting up to 2m0s for PersistentVolume pvc-b6cae3c6-bf17-426f-b680-3150f49b9c7b to get deleted Nov 13 05:45:58.794: INFO: PersistentVolume pvc-b6cae3c6-bf17-426f-b680-3150f49b9c7b found and phase=Bound (2.043178ms) Nov 13 05:46:00.799: INFO: PersistentVolume pvc-b6cae3c6-bf17-426f-b680-3150f49b9c7b was removed STEP: Deleting storageclass csi-mock-volumes-7214-sc47bgj STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7214 STEP: Waiting for namespaces [csi-mock-volumes-7214] to vanish STEP: uninstalling csi mock driver Nov 13 05:46:06.811: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-attacher Nov 13 05:46:06.814: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7214 Nov 13 05:46:06.818: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7214 Nov 13 05:46:06.821: INFO: deleting *v1.Role: csi-mock-volumes-7214-4672/external-attacher-cfg-csi-mock-volumes-7214 Nov 13 05:46:06.824: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-7214-4672/csi-attacher-role-cfg Nov 13 05:46:06.828: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-provisioner Nov 13 05:46:06.831: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7214 Nov 13 05:46:06.835: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7214 Nov 13 05:46:06.840: INFO: deleting *v1.Role: csi-mock-volumes-7214-4672/external-provisioner-cfg-csi-mock-volumes-7214 Nov 13 05:46:06.850: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7214-4672/csi-provisioner-role-cfg Nov 13 05:46:06.858: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-resizer Nov 13 05:46:06.865: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7214 Nov 13 05:46:06.868: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7214 Nov 13 05:46:06.871: INFO: deleting *v1.Role: csi-mock-volumes-7214-4672/external-resizer-cfg-csi-mock-volumes-7214 Nov 13 05:46:06.875: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7214-4672/csi-resizer-role-cfg Nov 13 05:46:06.879: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-snapshotter Nov 13 05:46:06.882: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7214 Nov 13 05:46:06.885: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7214 Nov 13 05:46:06.888: INFO: deleting *v1.Role: csi-mock-volumes-7214-4672/external-snapshotter-leaderelection-csi-mock-volumes-7214 Nov 13 05:46:06.891: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7214-4672/external-snapshotter-leaderelection Nov 13 05:46:06.895: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7214-4672/csi-mock Nov 13 05:46:06.898: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7214 Nov 13 05:46:06.901: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7214 Nov 13 05:46:06.904: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7214 Nov 13 05:46:06.907: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7214 Nov 13 05:46:06.912: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7214 Nov 13 05:46:06.915: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7214 Nov 13 05:46:06.918: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7214 Nov 13 05:46:06.921: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7214-4672/csi-mockplugin STEP: deleting the driver namespace: csi-mock-volumes-7214-4672 STEP: Waiting for namespaces [csi-mock-volumes-7214-4672] to vanish Nov 13 05:46:12.933: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7214 [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:12.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:271.923 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI CSIDriver deployment after pod creation using non-attachable mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:372 should bringup pod after deploying CSIDriver attach=false [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]","total":-1,"completed":5,"skipped":93,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:41.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-3168 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:45:41.456: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-attacher Nov 13 05:45:41.458: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3168 Nov 13 05:45:41.458: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3168 Nov 13 05:45:41.461: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3168 Nov 13 05:45:41.464: INFO: creating *v1.Role: csi-mock-volumes-3168-8842/external-attacher-cfg-csi-mock-volumes-3168 Nov 13 05:45:41.466: INFO: creating *v1.RoleBinding: csi-mock-volumes-3168-8842/csi-attacher-role-cfg Nov 13 05:45:41.469: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-provisioner Nov 13 05:45:41.472: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3168 Nov 13 05:45:41.472: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3168 Nov 13 05:45:41.474: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3168 Nov 13 05:45:41.477: INFO: creating *v1.Role: csi-mock-volumes-3168-8842/external-provisioner-cfg-csi-mock-volumes-3168 Nov 13 05:45:41.480: INFO: creating *v1.RoleBinding: csi-mock-volumes-3168-8842/csi-provisioner-role-cfg Nov 13 05:45:41.482: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-resizer Nov 13 05:45:41.485: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3168 Nov 13 05:45:41.485: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3168 Nov 13 05:45:41.487: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3168 Nov 13 05:45:41.489: INFO: creating *v1.Role: csi-mock-volumes-3168-8842/external-resizer-cfg-csi-mock-volumes-3168 Nov 13 05:45:41.492: INFO: creating *v1.RoleBinding: csi-mock-volumes-3168-8842/csi-resizer-role-cfg Nov 13 05:45:41.495: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-snapshotter Nov 13 05:45:41.497: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3168 Nov 13 05:45:41.497: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3168 Nov 13 05:45:41.499: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3168 Nov 13 05:45:41.502: INFO: creating *v1.Role: csi-mock-volumes-3168-8842/external-snapshotter-leaderelection-csi-mock-volumes-3168 Nov 13 
05:45:41.504: INFO: creating *v1.RoleBinding: csi-mock-volumes-3168-8842/external-snapshotter-leaderelection Nov 13 05:45:41.507: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-mock Nov 13 05:45:41.510: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3168 Nov 13 05:45:41.512: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3168 Nov 13 05:45:41.515: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3168 Nov 13 05:45:41.517: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3168 Nov 13 05:45:41.520: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3168 Nov 13 05:45:41.523: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3168 Nov 13 05:45:41.526: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3168 Nov 13 05:45:41.529: INFO: creating *v1.StatefulSet: csi-mock-volumes-3168-8842/csi-mockplugin Nov 13 05:45:41.534: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3168 Nov 13 05:45:41.537: INFO: creating *v1.StatefulSet: csi-mock-volumes-3168-8842/csi-mockplugin-attacher Nov 13 05:45:41.540: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3168" Nov 13 05:45:41.542: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3168 to register on node node2 STEP: Creating pod STEP: checking for CSIInlineVolumes feature Nov 13 05:45:59.086: INFO: Pod inline-volume-fhxr2 has the following logs: Nov 13 05:45:59.090: INFO: Deleting pod "inline-volume-fhxr2" in namespace "csi-mock-volumes-3168" Nov 13 05:45:59.094: INFO: Wait up to 5m0s for pod "inline-volume-fhxr2" to be fully deleted STEP: Deleting the previously created pod Nov 13 05:46:01.099: INFO: Deleting pod "pvc-volume-tester-q9wnt" in namespace "csi-mock-volumes-3168" Nov 13 05:46:01.104: INFO: Wait up to 5m0s for pod "pvc-volume-tester-q9wnt" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:46:05.121: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-q9wnt Nov 13 05:46:05.121: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-3168 Nov 13 05:46:05.121: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: badfe0bb-5327-4233-8bd7-3f19fbb27f75 Nov 13 05:46:05.121: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Nov 13 05:46:05.121: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true Nov 13 05:46:05.121: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-8f8f38ef16998a1e71b09c501ad64b124bf2568031f1c815e2d57911676053bc","target_path":"/var/lib/kubelet/pods/badfe0bb-5327-4233-8bd7-3f19fbb27f75/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-q9wnt Nov 13 05:46:05.121: INFO: Deleting pod "pvc-volume-tester-q9wnt" in namespace "csi-mock-volumes-3168" STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3168 STEP: Waiting for namespaces [csi-mock-volumes-3168] to vanish STEP: uninstalling csi mock driver Nov 13 05:46:11.134: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-attacher Nov 13 05:46:11.138: INFO: deleting *v1.ClusterRole: 
external-attacher-runner-csi-mock-volumes-3168 Nov 13 05:46:11.142: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3168 Nov 13 05:46:11.145: INFO: deleting *v1.Role: csi-mock-volumes-3168-8842/external-attacher-cfg-csi-mock-volumes-3168 Nov 13 05:46:11.148: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3168-8842/csi-attacher-role-cfg Nov 13 05:46:11.151: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-provisioner Nov 13 05:46:11.155: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3168 Nov 13 05:46:11.158: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3168 Nov 13 05:46:11.161: INFO: deleting *v1.Role: csi-mock-volumes-3168-8842/external-provisioner-cfg-csi-mock-volumes-3168 Nov 13 05:46:11.165: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3168-8842/csi-provisioner-role-cfg Nov 13 05:46:11.169: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-resizer Nov 13 05:46:11.172: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3168 Nov 13 05:46:11.176: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3168 Nov 13 05:46:11.179: INFO: deleting *v1.Role: csi-mock-volumes-3168-8842/external-resizer-cfg-csi-mock-volumes-3168 Nov 13 05:46:11.183: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3168-8842/csi-resizer-role-cfg Nov 13 05:46:11.187: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-snapshotter Nov 13 05:46:11.190: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3168 Nov 13 05:46:11.194: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3168 Nov 13 05:46:11.197: INFO: deleting *v1.Role: csi-mock-volumes-3168-8842/external-snapshotter-leaderelection-csi-mock-volumes-3168 Nov 13 05:46:11.200: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3168-8842/external-snapshotter-leaderelection Nov 13 05:46:11.204: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3168-8842/csi-mock Nov 13 05:46:11.207: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3168 Nov 13 05:46:11.211: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3168 Nov 13 05:46:11.218: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3168 Nov 13 05:46:11.227: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3168 Nov 13 05:46:11.231: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3168 Nov 13 05:46:11.235: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3168 Nov 13 05:46:11.239: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3168 Nov 13 05:46:11.243: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3168-8842/csi-mockplugin Nov 13 05:46:11.247: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3168 Nov 13 05:46:11.251: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3168-8842/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3168-8842 STEP: Waiting for namespaces [csi-mock-volumes-3168-8842] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:23.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:41.891 seconds] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":4,"skipped":211,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:54.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-8056 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:45:54.943: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-attacher Nov 13 05:45:54.946: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8056 Nov 13 05:45:54.946: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8056 Nov 13 05:45:54.948: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8056 Nov 13 05:45:54.951: INFO: creating *v1.Role: csi-mock-volumes-8056-5736/external-attacher-cfg-csi-mock-volumes-8056 Nov 13 05:45:54.953: INFO: creating *v1.RoleBinding: csi-mock-volumes-8056-5736/csi-attacher-role-cfg Nov 13 05:45:54.956: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-provisioner Nov 13 05:45:54.958: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8056 Nov 13 05:45:54.958: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8056 Nov 13 05:45:54.961: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8056 Nov 13 05:45:54.963: INFO: creating *v1.Role: csi-mock-volumes-8056-5736/external-provisioner-cfg-csi-mock-volumes-8056 Nov 13 05:45:54.966: INFO: creating *v1.RoleBinding: csi-mock-volumes-8056-5736/csi-provisioner-role-cfg Nov 13 05:45:54.968: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-resizer Nov 13 05:45:54.970: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8056 Nov 13 05:45:54.970: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8056 Nov 13 05:45:54.973: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8056 Nov 13 05:45:54.975: INFO: creating *v1.Role: csi-mock-volumes-8056-5736/external-resizer-cfg-csi-mock-volumes-8056 Nov 13 05:45:54.978: INFO: creating *v1.RoleBinding: csi-mock-volumes-8056-5736/csi-resizer-role-cfg Nov 13 05:45:54.980: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-snapshotter Nov 13 05:45:54.983: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8056 Nov 13 05:45:54.983: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-8056 Nov 13 05:45:54.986: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8056 Nov 13 05:45:54.988: INFO: creating *v1.Role: csi-mock-volumes-8056-5736/external-snapshotter-leaderelection-csi-mock-volumes-8056 Nov 13 05:45:54.990: INFO: creating *v1.RoleBinding: csi-mock-volumes-8056-5736/external-snapshotter-leaderelection Nov 13 05:45:54.993: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-mock Nov 13 05:45:54.995: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8056 Nov 13 05:45:54.998: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8056 Nov 13 05:45:55.000: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8056 Nov 13 05:45:55.002: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8056 Nov 13 05:45:55.005: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8056 Nov 13 05:45:55.007: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8056 Nov 13 05:45:55.010: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8056 Nov 13 05:45:55.013: INFO: creating *v1.StatefulSet: csi-mock-volumes-8056-5736/csi-mockplugin Nov 13 05:45:55.017: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8056 Nov 13 05:45:55.019: INFO: creating *v1.StatefulSet: csi-mock-volumes-8056-5736/csi-mockplugin-attacher Nov 13 05:45:55.023: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8056" Nov 13 05:45:55.025: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8056 to register on node node1 STEP: Creating pod Nov 13 05:46:09.549: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 13 05:46:09.569: INFO: Deleting pod "pvc-volume-tester-jrd9h" in namespace "csi-mock-volumes-8056" Nov 13 05:46:09.574: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jrd9h" to be fully deleted STEP: Deleting pod pvc-volume-tester-jrd9h Nov 13 05:46:09.576: INFO: Deleting pod "pvc-volume-tester-jrd9h" in namespace "csi-mock-volumes-8056" STEP: Deleting claim pvc-9xjvj STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-8056 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8056 STEP: Waiting for namespaces [csi-mock-volumes-8056] to vanish STEP: uninstalling csi mock driver Nov 13 05:46:15.595: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-attacher Nov 13 05:46:15.600: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8056 Nov 13 05:46:15.606: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8056 Nov 13 05:46:15.609: INFO: deleting *v1.Role: csi-mock-volumes-8056-5736/external-attacher-cfg-csi-mock-volumes-8056 Nov 13 05:46:15.613: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8056-5736/csi-attacher-role-cfg Nov 13 05:46:15.616: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-provisioner Nov 13 05:46:15.620: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8056 Nov 13 05:46:15.624: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8056 Nov 13 05:46:15.628: INFO: deleting *v1.Role: csi-mock-volumes-8056-5736/external-provisioner-cfg-csi-mock-volumes-8056 Nov 13 05:46:15.634: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-8056-5736/csi-provisioner-role-cfg Nov 13 05:46:15.640: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-resizer Nov 13 05:46:15.644: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8056 Nov 13 05:46:15.652: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8056 Nov 13 05:46:15.655: INFO: deleting *v1.Role: csi-mock-volumes-8056-5736/external-resizer-cfg-csi-mock-volumes-8056 Nov 13 05:46:15.659: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8056-5736/csi-resizer-role-cfg Nov 13 05:46:15.663: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-snapshotter Nov 13 05:46:15.666: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8056 Nov 13 05:46:15.670: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8056 Nov 13 05:46:15.673: INFO: deleting *v1.Role: csi-mock-volumes-8056-5736/external-snapshotter-leaderelection-csi-mock-volumes-8056 Nov 13 05:46:15.676: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8056-5736/external-snapshotter-leaderelection Nov 13 05:46:15.679: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8056-5736/csi-mock Nov 13 05:46:15.682: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8056 Nov 13 05:46:15.685: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8056 Nov 13 05:46:15.689: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8056 Nov 13 05:46:15.693: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8056 Nov 13 05:46:15.697: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8056 Nov 13 05:46:15.700: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8056 Nov 13 05:46:15.704: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8056 Nov 13 05:46:15.707: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8056-5736/csi-mockplugin Nov 13 05:46:15.710: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8056 Nov 13 05:46:15.713: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8056-5736/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8056-5736 STEP: Waiting for namespaces [csi-mock-volumes-8056-5736] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:27.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:32.861 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":13,"skipped":549,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:27.750: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:46:27.774: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:27.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-6921" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 1 containers and 2 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:12.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:46:19.040: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-7763c073-76c8-4e85-a605-2b2be066d4f1 && mount --bind /tmp/local-volume-test-7763c073-76c8-4e85-a605-2b2be066d4f1 /tmp/local-volume-test-7763c073-76c8-4e85-a605-2b2be066d4f1] Namespace:persistent-local-volumes-test-3982 PodName:hostexec-node2-lkhrn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:19.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:46:19.192: INFO: Creating a PV followed by a PVC Nov 13 05:46:19.198: INFO: Waiting for PV local-pv6hwnr to bind to PVC pvc-fz84s Nov 13 05:46:19.198: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fz84s] to have phase Bound Nov 13 05:46:19.200: INFO: PersistentVolumeClaim pvc-fz84s found but phase is Pending instead of Bound. Nov 13 05:46:21.203: INFO: PersistentVolumeClaim pvc-fz84s found but phase is Pending instead of Bound. Nov 13 05:46:23.205: INFO: PersistentVolumeClaim pvc-fz84s found but phase is Pending instead of Bound. Nov 13 05:46:25.209: INFO: PersistentVolumeClaim pvc-fz84s found but phase is Pending instead of Bound. 
Nov 13 05:46:27.213: INFO: PersistentVolumeClaim pvc-fz84s found and phase=Bound (8.014809582s) Nov 13 05:46:27.213: INFO: Waiting up to 3m0s for PersistentVolume local-pv6hwnr to have phase Bound Nov 13 05:46:27.215: INFO: PersistentVolume local-pv6hwnr found and phase=Bound (2.069343ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:46:27.219: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:46:27.221: INFO: Deleting PersistentVolumeClaim "pvc-fz84s" Nov 13 05:46:27.225: INFO: Deleting PersistentVolume "local-pv6hwnr" STEP: Removing the test directory Nov 13 05:46:27.229: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-7763c073-76c8-4e85-a605-2b2be066d4f1 && rm -r /tmp/local-volume-test-7763c073-76c8-4e85-a605-2b2be066d4f1] Namespace:persistent-local-volumes-test-3982 PodName:hostexec-node2-lkhrn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:27.229: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:27.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3982" for this suite. 
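The dir-bindmounted volume type above is prepared by creating a directory and bind-mounting it onto itself, and torn down with the matching umount followed by rm -r. A minimal host-side sketch of the same two phases in plain Go, assuming os/exec and root privileges; the path is illustrative:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	dir := "/tmp/local-volume-test-example" // illustrative path

	// Setup, as in the log: mkdir <dir> && mount --bind <dir> <dir>
	if err := os.MkdirAll(dir, 0755); err != nil {
		log.Fatal(err)
	}
	if err := exec.Command("mount", "--bind", dir, dir).Run(); err != nil {
		log.Fatal(err)
	}

	// ... the directory can now be published as a "local" PersistentVolume ...

	// Teardown, as in the log: umount <dir> && rm -r <dir>
	if err := exec.Command("umount", dir).Run(); err != nil {
		log.Fatal(err)
	}
	if err := os.RemoveAll(dir); err != nil {
		log.Fatal(err)
	}
}

The self bind-mount simply turns the path into its own mount point, which is what distinguishes this variant from the plain directory-backed local volume cases.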
S [SKIPPING] [14.832 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:04.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 STEP: Building a driver namespace object, basename csi-mock-volumes-1662 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:45:04.237: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-attacher Nov 13 05:45:04.240: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1662 Nov 13 05:45:04.240: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1662 Nov 13 05:45:04.243: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1662 Nov 13 05:45:04.246: INFO: creating *v1.Role: csi-mock-volumes-1662-2197/external-attacher-cfg-csi-mock-volumes-1662 Nov 13 05:45:04.249: INFO: creating *v1.RoleBinding: csi-mock-volumes-1662-2197/csi-attacher-role-cfg Nov 13 05:45:04.252: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-provisioner Nov 13 05:45:04.255: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1662 Nov 13 05:45:04.255: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1662 Nov 13 05:45:04.258: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1662 Nov 13 05:45:04.261: INFO: creating *v1.Role: csi-mock-volumes-1662-2197/external-provisioner-cfg-csi-mock-volumes-1662 Nov 13 05:45:04.264: INFO: creating *v1.RoleBinding: csi-mock-volumes-1662-2197/csi-provisioner-role-cfg Nov 13 05:45:04.267: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-resizer Nov 13 05:45:04.270: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1662 Nov 13 05:45:04.270: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1662 Nov 13 05:45:04.272: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1662 Nov 13 05:45:04.276: INFO: creating *v1.Role: csi-mock-volumes-1662-2197/external-resizer-cfg-csi-mock-volumes-1662 Nov 13 05:45:04.279: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-1662-2197/csi-resizer-role-cfg Nov 13 05:45:04.281: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-snapshotter Nov 13 05:45:04.284: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1662 Nov 13 05:45:04.284: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1662 Nov 13 05:45:04.287: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1662 Nov 13 05:45:04.290: INFO: creating *v1.Role: csi-mock-volumes-1662-2197/external-snapshotter-leaderelection-csi-mock-volumes-1662 Nov 13 05:45:04.293: INFO: creating *v1.RoleBinding: csi-mock-volumes-1662-2197/external-snapshotter-leaderelection Nov 13 05:45:04.296: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-mock Nov 13 05:45:04.298: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1662 Nov 13 05:45:04.302: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1662 Nov 13 05:45:04.307: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1662 Nov 13 05:45:04.312: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1662 Nov 13 05:45:04.318: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1662 Nov 13 05:45:04.322: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1662 Nov 13 05:45:04.327: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1662 Nov 13 05:45:04.331: INFO: creating *v1.StatefulSet: csi-mock-volumes-1662-2197/csi-mockplugin Nov 13 05:45:04.336: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1662 Nov 13 05:45:04.338: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1662" Nov 13 05:45:04.341: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1662 to register on node node2 STEP: Creating pod with fsGroup Nov 13 05:45:18.861: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:45:18.866: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9pbln] to have phase Bound Nov 13 05:45:18.868: INFO: PersistentVolumeClaim pvc-9pbln found but phase is Pending instead of Bound. 
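The repeated "found but phase is Pending instead of Bound" messages here and elsewhere in the run come from polling the claim until it reports phase Bound or the timeout expires. A rough equivalent using client-go and the apimachinery wait helpers might look like the following; this is a sketch rather than the framework's actual helper, clientset construction is omitted, and the two-second interval is inferred from the log timestamps:

package pvcwait

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls every two seconds until the claim reports phase
// Bound, mirroring the "Waiting up to timeout=... to have phase Bound"
// sequences in the log.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}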
Nov 13 05:45:20.872: INFO: PersistentVolumeClaim pvc-9pbln found and phase=Bound (2.005667669s) Nov 13 05:45:32.892: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-1662] Namespace:csi-mock-volumes-1662 PodName:pvc-volume-tester-9prmv ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:32.892: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:32.966: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-1662/csi-mock-volumes-1662'; sync] Namespace:csi-mock-volumes-1662 PodName:pvc-volume-tester-9prmv ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:32.966: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:35.870: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-1662/csi-mock-volumes-1662] Namespace:csi-mock-volumes-1662 PodName:pvc-volume-tester-9prmv ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:35.870: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:36.007: INFO: pod csi-mock-volumes-1662/pvc-volume-tester-9prmv exec for cmd ls -l /mnt/test/csi-mock-volumes-1662/csi-mock-volumes-1662, stdout: -rw-r--r-- 1 root root 13 Nov 13 05:45 /mnt/test/csi-mock-volumes-1662/csi-mock-volumes-1662, stderr: Nov 13 05:45:36.007: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-1662] Namespace:csi-mock-volumes-1662 PodName:pvc-volume-tester-9prmv ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:36.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-9prmv Nov 13 05:45:36.124: INFO: Deleting pod "pvc-volume-tester-9prmv" in namespace "csi-mock-volumes-1662" Nov 13 05:45:36.129: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9prmv" to be fully deleted STEP: Deleting claim pvc-9pbln Nov 13 05:46:08.145: INFO: Waiting up to 2m0s for PersistentVolume pvc-f9a4b739-8204-4b17-8eb7-378ab52ce7f6 to get deleted Nov 13 05:46:08.147: INFO: PersistentVolume pvc-f9a4b739-8204-4b17-8eb7-378ab52ce7f6 found and phase=Bound (1.97734ms) Nov 13 05:46:10.150: INFO: PersistentVolume pvc-f9a4b739-8204-4b17-8eb7-378ab52ce7f6 was removed STEP: Deleting storageclass csi-mock-volumes-1662-sc4hq8g STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1662 STEP: Waiting for namespaces [csi-mock-volumes-1662] to vanish STEP: uninstalling csi mock driver Nov 13 05:46:16.162: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-attacher Nov 13 05:46:16.166: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1662 Nov 13 05:46:16.170: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1662 Nov 13 05:46:16.173: INFO: deleting *v1.Role: csi-mock-volumes-1662-2197/external-attacher-cfg-csi-mock-volumes-1662 Nov 13 05:46:16.177: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1662-2197/csi-attacher-role-cfg Nov 13 05:46:16.181: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-provisioner Nov 13 05:46:16.184: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1662 Nov 13 05:46:16.188: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1662 Nov 13 05:46:16.191: INFO: deleting *v1.Role: 
csi-mock-volumes-1662-2197/external-provisioner-cfg-csi-mock-volumes-1662 Nov 13 05:46:16.194: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1662-2197/csi-provisioner-role-cfg Nov 13 05:46:16.197: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-resizer Nov 13 05:46:16.201: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1662 Nov 13 05:46:16.205: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1662 Nov 13 05:46:16.208: INFO: deleting *v1.Role: csi-mock-volumes-1662-2197/external-resizer-cfg-csi-mock-volumes-1662 Nov 13 05:46:16.211: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1662-2197/csi-resizer-role-cfg Nov 13 05:46:16.214: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-snapshotter Nov 13 05:46:16.217: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1662 Nov 13 05:46:16.221: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1662 Nov 13 05:46:16.224: INFO: deleting *v1.Role: csi-mock-volumes-1662-2197/external-snapshotter-leaderelection-csi-mock-volumes-1662 Nov 13 05:46:16.228: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1662-2197/external-snapshotter-leaderelection Nov 13 05:46:16.231: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1662-2197/csi-mock Nov 13 05:46:16.234: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1662 Nov 13 05:46:16.237: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1662 Nov 13 05:46:16.240: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1662 Nov 13 05:46:16.244: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1662 Nov 13 05:46:16.248: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1662 Nov 13 05:46:16.251: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1662 Nov 13 05:46:16.254: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1662 Nov 13 05:46:16.257: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1662-2197/csi-mockplugin Nov 13 05:46:16.261: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1662 STEP: deleting the driver namespace: csi-mock-volumes-1662-2197 STEP: Waiting for namespaces [csi-mock-volumes-1662-2197] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:28.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:84.104 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436 should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":8,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:28.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Nov 13 05:46:28.383: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Nov 13 05:46:28.388: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:28.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-5508" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PVC that is not in active use by a pod [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:28.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Nov 13 05:46:28.497: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Nov 13 05:46:28.501: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:28.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-8356" for this suite. 
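Both PVC Protection specs skip at the same point: before creating a claim with no explicit storage class, the setup looks for a StorageClass marked as the cluster default and finds none. The default is conventionally signalled by the storageclass.kubernetes.io/is-default-class annotation; a hedged client-go sketch of such a lookup follows (the function is illustrative, not the framework's own code):

package scutil

import (
	"context"
	"errors"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// findDefaultStorageClass returns the name of the StorageClass annotated as
// the cluster default; the "No default storage class found" skips above are
// what this kind of lookup coming back empty produces.
func findDefaultStorageClass(cs kubernetes.Interface) (string, error) {
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, sc := range scs.Items {
		if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
			return sc.Name, nil
		}
	}
	return "", errors.New("No default storage class found")
}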
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PVC in active use by a pod is not removed immediately [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:23.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 13 05:46:23.313: INFO: The status of Pod test-hostpath-type-k8jxv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:46:25.317: INFO: The status of Pod test-hostpath-type-k8jxv is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:46:27.319: INFO: The status of Pod test-hostpath-type-k8jxv is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:29.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-9306" for this suite. 
• [SLOW TEST:6.078 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev","total":-1,"completed":5,"skipped":213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:27.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 13 05:46:31.931: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6897 PodName:hostexec-node2-5jpnb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:31.931: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:46:32.137: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 13 05:46:32.137: INFO: exec node2: stdout: "0\n" Nov 13 05:46:32.137: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 13 05:46:32.137: INFO: exec node2: exit code: 0 Nov 13 05:46:32.137: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:32.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6897" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [4.268 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:28.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 13 05:46:28.597: INFO: Waiting up to 5m0s for pod "pod-fb68237c-c54b-4e9e-af68-4b6d155305b5" in namespace "emptydir-2849" to be "Succeeded or Failed" Nov 13 05:46:28.600: INFO: Pod "pod-fb68237c-c54b-4e9e-af68-4b6d155305b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.331203ms Nov 13 05:46:30.603: INFO: Pod "pod-fb68237c-c54b-4e9e-af68-4b6d155305b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006599638s Nov 13 05:46:32.607: INFO: Pod "pod-fb68237c-c54b-4e9e-af68-4b6d155305b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010334799s STEP: Saw pod success Nov 13 05:46:32.607: INFO: Pod "pod-fb68237c-c54b-4e9e-af68-4b6d155305b5" satisfied condition "Succeeded or Failed" Nov 13 05:46:32.610: INFO: Trying to get logs from node node2 pod pod-fb68237c-c54b-4e9e-af68-4b6d155305b5 container test-container: STEP: delete the pod Nov 13 05:46:32.626: INFO: Waiting for pod pod-fb68237c-c54b-4e9e-af68-4b6d155305b5 to disappear Nov 13 05:46:32.628: INFO: Pod pod-fb68237c-c54b-4e9e-af68-4b6d155305b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:32.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2849" for this suite. 
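Note on the EmptyDir FSGroup spec above: it asserts that files created by a root container on a tmpfs-backed emptyDir pick up the pod's securityContext.fsGroup as their group owner. A manual spot check of the same property against a still-running pod could look like the sketch below; the pod name fsgroup-demo and the mount path /test-volume are placeholders, not values from this run.

    # the group id reported for a freshly created file should equal the pod's fsGroup
    kubectl exec fsgroup-demo -- sh -c 'touch /test-volume/probe && stat -c %g /test-volume/probe'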
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":9,"skipped":342,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:32.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:46:32.694: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:32.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2130" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:29.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63 STEP: Creating a pod to test emptydir subpath on tmpfs Nov 13 05:46:29.467: INFO: Waiting up to 5m0s for pod "pod-bcdee9be-0fe0-41c3-875a-29cc2d3ee7f4" in namespace "emptydir-467" to be "Succeeded or Failed" Nov 13 05:46:29.470: INFO: Pod "pod-bcdee9be-0fe0-41c3-875a-29cc2d3ee7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.765497ms Nov 13 05:46:31.473: INFO: Pod "pod-bcdee9be-0fe0-41c3-875a-29cc2d3ee7f4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005880586s Nov 13 05:46:33.477: INFO: Pod "pod-bcdee9be-0fe0-41c3-875a-29cc2d3ee7f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009777035s STEP: Saw pod success Nov 13 05:46:33.477: INFO: Pod "pod-bcdee9be-0fe0-41c3-875a-29cc2d3ee7f4" satisfied condition "Succeeded or Failed" Nov 13 05:46:33.479: INFO: Trying to get logs from node node2 pod pod-bcdee9be-0fe0-41c3-875a-29cc2d3ee7f4 container test-container: STEP: delete the pod Nov 13 05:46:33.521: INFO: Waiting for pod pod-bcdee9be-0fe0-41c3-875a-29cc2d3ee7f4 to disappear Nov 13 05:46:33.523: INFO: Pod pod-bcdee9be-0fe0-41c3-875a-29cc2d3ee7f4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:33.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-467" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":6,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:32.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 STEP: Creating configMap with name configmap-test-volume-map-8529ca04-da0a-4403-a98d-47a1815139b7 STEP: Creating a pod to test consume configMaps Nov 13 05:46:32.196: INFO: Waiting up to 5m0s for pod "pod-configmaps-67c65dba-ac74-4462-a02a-0f3ef8e863ef" in namespace "configmap-436" to be "Succeeded or Failed" Nov 13 05:46:32.201: INFO: Pod "pod-configmaps-67c65dba-ac74-4462-a02a-0f3ef8e863ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.817388ms Nov 13 05:46:34.204: INFO: Pod "pod-configmaps-67c65dba-ac74-4462-a02a-0f3ef8e863ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007553994s Nov 13 05:46:36.208: INFO: Pod "pod-configmaps-67c65dba-ac74-4462-a02a-0f3ef8e863ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011795417s STEP: Saw pod success Nov 13 05:46:36.208: INFO: Pod "pod-configmaps-67c65dba-ac74-4462-a02a-0f3ef8e863ef" satisfied condition "Succeeded or Failed" Nov 13 05:46:36.211: INFO: Trying to get logs from node node1 pod pod-configmaps-67c65dba-ac74-4462-a02a-0f3ef8e863ef container agnhost-container: STEP: delete the pod Nov 13 05:46:36.229: INFO: Waiting for pod pod-configmaps-67c65dba-ac74-4462-a02a-0f3ef8e863ef to disappear Nov 13 05:46:36.231: INFO: Pod pod-configmaps-67c65dba-ac74-4462-a02a-0f3ef8e863ef no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:36.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-436" for this suite. 
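Note on the ConfigMap spec above: it projects a key through an items mapping and reads the file back as a non-root user with fsGroup set. A rough manual equivalent is sketched below; the configmap name, pod name, and mount path are all placeholders rather than objects from this run.

    # create a configmap, then (once a pod mounts it with an items mapping and a
    # defaultMode) inspect the projected file's content, mode and group owner
    kubectl create configmap configmap-test --from-literal=data-1=value-1
    kubectl exec configmap-volume-demo -- sh -c 'cat /etc/configmap-volume/data-1; stat -c "%a %g" /etc/configmap-volume/data-1'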
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:27.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea" Nov 13 05:46:29.924: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea && dd if=/dev/zero of=/tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea/file] Namespace:persistent-local-volumes-test-2938 PodName:hostexec-node1-vv929 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:29.924: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:46:30.071: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2938 PodName:hostexec-node1-vv929 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:30.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:46:30.198: INFO: Creating a PV followed by a PVC Nov 13 05:46:30.207: INFO: Waiting for PV local-pvnm6kn to bind to PVC pvc-t4z9f Nov 13 05:46:30.207: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-t4z9f] to have phase Bound Nov 13 05:46:30.209: INFO: PersistentVolumeClaim pvc-t4z9f found but phase is Pending instead of Bound. 
Nov 13 05:46:32.217: INFO: PersistentVolumeClaim pvc-t4z9f found and phase=Bound (2.010306787s) Nov 13 05:46:32.217: INFO: Waiting up to 3m0s for PersistentVolume local-pvnm6kn to have phase Bound Nov 13 05:46:32.220: INFO: PersistentVolume local-pvnm6kn found and phase=Bound (3.298786ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:46:36.250: INFO: pod "pod-163012be-a284-426b-b053-6c2fccb9e370" created on Node "node1" STEP: Writing in pod1 Nov 13 05:46:36.250: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2938 PodName:pod-163012be-a284-426b-b053-6c2fccb9e370 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:46:36.250: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:46:36.349: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:46:36.349: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2938 PodName:pod-163012be-a284-426b-b053-6c2fccb9e370 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:46:36.349: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:46:36.458: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-163012be-a284-426b-b053-6c2fccb9e370 in namespace persistent-local-volumes-test-2938 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:46:36.463: INFO: Deleting PersistentVolumeClaim "pvc-t4z9f" Nov 13 05:46:36.466: INFO: Deleting PersistentVolume "local-pvnm6kn" Nov 13 05:46:36.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2938 PodName:hostexec-node1-vv929 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:36.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea/file Nov 13 05:46:36.598: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-2938 PodName:hostexec-node1-vv929 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:36.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory 
/tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea Nov 13 05:46:36.697: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea] Namespace:persistent-local-volumes-test-2938 PodName:hostexec-node1-vv929 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:36.697: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:36.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2938" for this suite. • [SLOW TEST:8.920 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":14,"skipped":602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:36.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:46:36.877: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:36.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-4538" for this suite. 
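Note on the [Volume type: blockfswithoutformat] spec that passed just before the Pod Disks skip: it backs its local PV with a loop device built on node1. Condensed from the ExecWithOptions lines above (the commands are run via nsenter inside the hostexec pod), the node-side lifecycle is roughly:

    # create a 20MiB backing file and attach it to the first free loop device
    mkdir -p /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea
    dd if=/dev/zero of=/tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea/file bs=4096 count=5120
    losetup -f /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea/file
    # resolve which loop device holds the file, then detach and remove it on teardown
    losetup | grep /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea/file | awk '{ print $1 }'
    losetup -d /dev/loop0
    rm -r /tmp/local-volume-test-7d8280a2-fa4f-4223-a343-d59c502bcdea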
S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:32.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c" Nov 13 05:46:36.800: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c" "/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c"] Namespace:persistent-local-volumes-test-5650 PodName:hostexec-node1-bttwk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:36.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:46:36.909: INFO: Creating a PV followed by a PVC Nov 13 05:46:36.915: INFO: Waiting for PV local-pvs45fn to bind to PVC pvc-7mk5p Nov 13 05:46:36.915: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7mk5p] to have phase Bound Nov 13 05:46:36.918: INFO: PersistentVolumeClaim pvc-7mk5p found but phase is Pending instead of Bound. Nov 13 05:46:38.923: INFO: PersistentVolumeClaim pvc-7mk5p found but phase is Pending instead of Bound. Nov 13 05:46:40.925: INFO: PersistentVolumeClaim pvc-7mk5p found but phase is Pending instead of Bound. 
Nov 13 05:46:42.929: INFO: PersistentVolumeClaim pvc-7mk5p found and phase=Bound (6.013284047s) Nov 13 05:46:42.929: INFO: Waiting up to 3m0s for PersistentVolume local-pvs45fn to have phase Bound Nov 13 05:46:42.931: INFO: PersistentVolume local-pvs45fn found and phase=Bound (2.007937ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:46:46.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5650 exec pod-15a320b9-b8e0-41a3-9ddd-8fcce0c78e3a --namespace=persistent-local-volumes-test-5650 -- stat -c %g /mnt/volume1' Nov 13 05:46:47.235: INFO: stderr: "" Nov 13 05:46:47.235: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-15a320b9-b8e0-41a3-9ddd-8fcce0c78e3a in namespace persistent-local-volumes-test-5650 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:46:47.240: INFO: Deleting PersistentVolumeClaim "pvc-7mk5p" Nov 13 05:46:47.243: INFO: Deleting PersistentVolume "local-pvs45fn" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c" Nov 13 05:46:47.247: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c"] Namespace:persistent-local-volumes-test-5650 PodName:hostexec-node1-bttwk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:47.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:46:47.357: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c] Namespace:persistent-local-volumes-test-5650 PodName:hostexec-node1-bttwk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:47.357: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:47.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5650" for this suite. 
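Note on the [Volume type: tmpfs] spec above: it prepares its local PV by mounting a small tmpfs on node1 and then proves fsGroup took effect by stat-ing the mount point from inside the pod, the "1234" in stdout being the group id applied via the pod's fsGroup. Condensed from the ExecWithOptions and kubectl lines above:

    # node side: 10MiB tmpfs backing the local PV, plus the matching teardown
    mkdir -p "/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c"
    mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c" "/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c"
    # test side: the group id of the mount point should equal the pod's fsGroup
    kubectl --namespace=persistent-local-volumes-test-5650 exec pod-15a320b9-b8e0-41a3-9ddd-8fcce0c78e3a -- stat -c %g /mnt/volume1
    umount "/tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c"
    rm -r /tmp/local-volume-test-7012c6e6-2130-4ca0-b58f-54652a218f3c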
• [SLOW TEST:14.710 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":10,"skipped":381,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:33.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:46:33.607: INFO: The status of Pod test-hostpath-type-kr9vs is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:46:35.613: INFO: The status of Pod test-hostpath-type-kr9vs is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:46:37.610: INFO: The status of Pod test-hostpath-type-kr9vs is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:47.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-5012" for this suite. 
• [SLOW TEST:14.089 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:36.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 13 05:46:36.348: INFO: The status of Pod test-hostpath-type-cnh5t is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:46:38.353: INFO: The status of Pod test-hostpath-type-cnh5t is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:46:40.353: INFO: The status of Pod test-hostpath-type-cnh5t is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:46:42.352: INFO: The status of Pod test-hostpath-type-cnh5t is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:48.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-9823" for this suite. 
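Note on the two HostPathType specs above (the Directory case that mounts successfully and the File case that must fail): both rely on kubelet refusing the mount and recording an event when the declared hostPath type does not match what exists on the node. To inspect that event for a pod of your own, a filter like the one below can be used; the namespace and pod name are placeholders, and the exact reason string is an assumption rather than something shown in this log.

    # warning events attached to the failing pod; a hostPath type mismatch surfaces
    # as a mount failure whose message mentions the type check
    kubectl -n host-path-type-demo get events --field-selector involvedObject.name=hostpath-type-demo-pod,type=Warning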
• [SLOW TEST:12.105 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile","total":-1,"completed":7,"skipped":167,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:48.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:46:48.442: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:46:48.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2664" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:41.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-4162 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:45:41.375: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-attacher Nov 13 05:45:41.378: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4162 Nov 13 05:45:41.378: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4162 Nov 13 05:45:41.380: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4162 Nov 13 
05:45:41.383: INFO: creating *v1.Role: csi-mock-volumes-4162-6174/external-attacher-cfg-csi-mock-volumes-4162 Nov 13 05:45:41.387: INFO: creating *v1.RoleBinding: csi-mock-volumes-4162-6174/csi-attacher-role-cfg Nov 13 05:45:41.390: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-provisioner Nov 13 05:45:41.393: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4162 Nov 13 05:45:41.393: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4162 Nov 13 05:45:41.396: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4162 Nov 13 05:45:41.399: INFO: creating *v1.Role: csi-mock-volumes-4162-6174/external-provisioner-cfg-csi-mock-volumes-4162 Nov 13 05:45:41.401: INFO: creating *v1.RoleBinding: csi-mock-volumes-4162-6174/csi-provisioner-role-cfg Nov 13 05:45:41.404: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-resizer Nov 13 05:45:41.406: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4162 Nov 13 05:45:41.406: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4162 Nov 13 05:45:41.410: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4162 Nov 13 05:45:41.412: INFO: creating *v1.Role: csi-mock-volumes-4162-6174/external-resizer-cfg-csi-mock-volumes-4162 Nov 13 05:45:41.415: INFO: creating *v1.RoleBinding: csi-mock-volumes-4162-6174/csi-resizer-role-cfg Nov 13 05:45:41.419: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-snapshotter Nov 13 05:45:41.422: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4162 Nov 13 05:45:41.422: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4162 Nov 13 05:45:41.428: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4162 Nov 13 05:45:41.441: INFO: creating *v1.Role: csi-mock-volumes-4162-6174/external-snapshotter-leaderelection-csi-mock-volumes-4162 Nov 13 05:45:41.445: INFO: creating *v1.RoleBinding: csi-mock-volumes-4162-6174/external-snapshotter-leaderelection Nov 13 05:45:41.448: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-mock Nov 13 05:45:41.451: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4162 Nov 13 05:45:41.454: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4162 Nov 13 05:45:41.457: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4162 Nov 13 05:45:41.460: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4162 Nov 13 05:45:41.463: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4162 Nov 13 05:45:41.466: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4162 Nov 13 05:45:41.469: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4162 Nov 13 05:45:41.472: INFO: creating *v1.StatefulSet: csi-mock-volumes-4162-6174/csi-mockplugin Nov 13 05:45:41.476: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4162 Nov 13 05:45:41.480: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4162" Nov 13 05:45:41.482: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4162 to register on node node2 I1113 05:45:53.578656 37 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4162","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:45:53.648982 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:45:53.651321 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4162","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:45:53.653347 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:45:53.656054 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:45:54.102470 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4162"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:45:57.753: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I1113 05:45:57.786859 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-f2a38de8-0a56-4356-9545-539c71005ebf","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1113 05:45:57.970332 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-f2a38de8-0a56-4356-9545-539c71005ebf","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-f2a38de8-0a56-4356-9545-539c71005ebf"}}},"Error":"","FullError":null} I1113 05:46:01.015652 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:46:01.017: INFO: >>> kubeConfig: /root/.kube/config I1113 05:46:01.112706 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f2a38de8-0a56-4356-9545-539c71005ebf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f2a38de8-0a56-4356-9545-539c71005ebf","storage.kubernetes.io/csiProvisionerIdentity":"1636782353651-8081-csi-mock-csi-mock-volumes-4162"}},"Response":{},"Error":"","FullError":null} I1113 05:46:01.670585 37 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:46:01.672: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:46:01.825: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:46:01.952: INFO: >>> kubeConfig: /root/.kube/config I1113 05:46:02.036428 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f2a38de8-0a56-4356-9545-539c71005ebf/globalmount","target_path":"/var/lib/kubelet/pods/da33f92a-f412-4f5c-b34c-793234e30b10/volumes/kubernetes.io~csi/pvc-f2a38de8-0a56-4356-9545-539c71005ebf/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f2a38de8-0a56-4356-9545-539c71005ebf","storage.kubernetes.io/csiProvisionerIdentity":"1636782353651-8081-csi-mock-csi-mock-volumes-4162"}},"Response":{},"Error":"","FullError":null} Nov 13 05:46:05.773: INFO: Deleting pod "pvc-volume-tester-hz54c" in namespace "csi-mock-volumes-4162" Nov 13 05:46:05.778: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hz54c" to be fully deleted Nov 13 05:46:09.280: INFO: >>> kubeConfig: /root/.kube/config I1113 05:46:09.590449 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/da33f92a-f412-4f5c-b34c-793234e30b10/volumes/kubernetes.io~csi/pvc-f2a38de8-0a56-4356-9545-539c71005ebf/mount"},"Response":{},"Error":"","FullError":null} I1113 05:46:09.684649 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:46:09.686968 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f2a38de8-0a56-4356-9545-539c71005ebf/globalmount"},"Response":{},"Error":"","FullError":null} I1113 05:46:11.798886 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Nov 13 05:46:12.786: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h7b2r", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4162", SelfLink:"", UID:"f2a38de8-0a56-4356-9545-539c71005ebf", ResourceVersion:"204965", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379157, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a36ae0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a36af8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00379c780), VolumeMode:(*v1.PersistentVolumeMode)(0xc00379c790), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:12.786: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h7b2r", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4162", SelfLink:"", UID:"f2a38de8-0a56-4356-9545-539c71005ebf", ResourceVersion:"204968", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379157, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00278f980), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00278f998)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00278f9b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00278f9c8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0036c14f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0036c1500), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:12.787: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h7b2r", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4162", SelfLink:"", UID:"f2a38de8-0a56-4356-9545-539c71005ebf", ResourceVersion:"204969", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379157, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4162", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae4318), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae4330)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae4348), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc004ae4360)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae4378), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae4390)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00387e150), VolumeMode:(*v1.PersistentVolumeMode)(0xc00387e160), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:12.787: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h7b2r", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4162", SelfLink:"", UID:"f2a38de8-0a56-4356-9545-539c71005ebf", ResourceVersion:"204981", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379157, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4162", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae43c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae43d8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae43f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae4408)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae4420), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae4438)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-f2a38de8-0a56-4356-9545-539c71005ebf", StorageClassName:(*string)(0xc00387e190), VolumeMode:(*v1.PersistentVolumeMode)(0xc00387e1a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:12.787: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h7b2r", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4162", SelfLink:"", UID:"f2a38de8-0a56-4356-9545-539c71005ebf", ResourceVersion:"204982", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, 
ext:63772379157, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4162", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae4468), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae4480)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae4498), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae44b0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae44c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae44e0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-f2a38de8-0a56-4356-9545-539c71005ebf", StorageClassName:(*string)(0xc00387e1d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00387e1e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:12.787: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h7b2r", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4162", SelfLink:"", UID:"f2a38de8-0a56-4356-9545-539c71005ebf", ResourceVersion:"205269", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379157, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004ae4510), DeletionGracePeriodSeconds:(*int64)(0xc004ad24b8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4162", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae4528), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae4540)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae4558), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae4570)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004ae4588), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004ae45a0)}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-f2a38de8-0a56-4356-9545-539c71005ebf", StorageClassName:(*string)(0xc00387e220), VolumeMode:(*v1.PersistentVolumeMode)(0xc00387e230), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:12.787: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h7b2r", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4162", SelfLink:"", UID:"f2a38de8-0a56-4356-9545-539c71005ebf", ResourceVersion:"205270", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379157, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc003236a98), DeletionGracePeriodSeconds:(*int64)(0xc0049b4f68), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4162", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003236ab0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003236ac8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003236b70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003236ba0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003236d08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003236d20)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-f2a38de8-0a56-4356-9545-539c71005ebf", StorageClassName:(*string)(0xc00379d300), VolumeMode:(*v1.PersistentVolumeMode)(0xc00379d370), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-hz54c Nov 13 05:46:12.787: INFO: Deleting pod "pvc-volume-tester-hz54c" in namespace "csi-mock-volumes-4162" STEP: Deleting claim pvc-h7b2r STEP: Deleting storageclass 
csi-mock-volumes-4162-scdclpb STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4162 STEP: Waiting for namespaces [csi-mock-volumes-4162] to vanish STEP: uninstalling csi mock driver Nov 13 05:46:18.828: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-attacher Nov 13 05:46:18.833: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4162 Nov 13 05:46:18.836: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4162 Nov 13 05:46:18.840: INFO: deleting *v1.Role: csi-mock-volumes-4162-6174/external-attacher-cfg-csi-mock-volumes-4162 Nov 13 05:46:18.844: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4162-6174/csi-attacher-role-cfg Nov 13 05:46:18.848: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-provisioner Nov 13 05:46:18.851: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4162 Nov 13 05:46:18.854: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4162 Nov 13 05:46:18.857: INFO: deleting *v1.Role: csi-mock-volumes-4162-6174/external-provisioner-cfg-csi-mock-volumes-4162 Nov 13 05:46:18.862: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4162-6174/csi-provisioner-role-cfg Nov 13 05:46:18.865: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-resizer Nov 13 05:46:18.869: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4162 Nov 13 05:46:18.872: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4162 Nov 13 05:46:18.875: INFO: deleting *v1.Role: csi-mock-volumes-4162-6174/external-resizer-cfg-csi-mock-volumes-4162 Nov 13 05:46:18.879: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4162-6174/csi-resizer-role-cfg Nov 13 05:46:18.882: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-snapshotter Nov 13 05:46:18.886: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4162 Nov 13 05:46:18.889: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4162 Nov 13 05:46:18.893: INFO: deleting *v1.Role: csi-mock-volumes-4162-6174/external-snapshotter-leaderelection-csi-mock-volumes-4162 Nov 13 05:46:18.896: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4162-6174/external-snapshotter-leaderelection Nov 13 05:46:18.900: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4162-6174/csi-mock Nov 13 05:46:18.904: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4162 Nov 13 05:46:18.907: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4162 Nov 13 05:46:18.910: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4162 Nov 13 05:46:18.914: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4162 Nov 13 05:46:18.917: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4162 Nov 13 05:46:18.921: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4162 Nov 13 05:46:18.924: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4162 Nov 13 05:46:18.928: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4162-6174/csi-mockplugin Nov 13 05:46:18.931: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4162 STEP: deleting the driver namespace: csi-mock-volumes-4162-6174 STEP: Waiting for namespaces [csi-mock-volumes-4162-6174] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:02.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:81.631 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":9,"skipped":363,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":419,"failed":0} [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:47.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-7631 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:45:47.088: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-attacher Nov 13 05:45:47.091: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7631 Nov 13 05:45:47.091: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7631 Nov 13 05:45:47.094: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7631 Nov 13 05:45:47.097: INFO: creating *v1.Role: csi-mock-volumes-7631-1183/external-attacher-cfg-csi-mock-volumes-7631 Nov 13 05:45:47.100: INFO: creating *v1.RoleBinding: csi-mock-volumes-7631-1183/csi-attacher-role-cfg Nov 13 05:45:47.103: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-provisioner Nov 13 05:45:47.105: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7631 Nov 13 05:45:47.106: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7631 Nov 13 05:45:47.108: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7631 Nov 13 05:45:47.111: INFO: creating *v1.Role: csi-mock-volumes-7631-1183/external-provisioner-cfg-csi-mock-volumes-7631 Nov 13 05:45:47.114: INFO: creating *v1.RoleBinding: csi-mock-volumes-7631-1183/csi-provisioner-role-cfg Nov 13 05:45:47.116: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-resizer Nov 13 05:45:47.118: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7631 Nov 13 05:45:47.118: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7631 Nov 13 05:45:47.121: INFO: creating *v1.ClusterRoleBinding: 
csi-resizer-role-csi-mock-volumes-7631 Nov 13 05:45:47.125: INFO: creating *v1.Role: csi-mock-volumes-7631-1183/external-resizer-cfg-csi-mock-volumes-7631 Nov 13 05:45:47.128: INFO: creating *v1.RoleBinding: csi-mock-volumes-7631-1183/csi-resizer-role-cfg Nov 13 05:45:47.131: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-snapshotter Nov 13 05:45:47.133: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7631 Nov 13 05:45:47.133: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7631 Nov 13 05:45:47.136: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7631 Nov 13 05:45:47.139: INFO: creating *v1.Role: csi-mock-volumes-7631-1183/external-snapshotter-leaderelection-csi-mock-volumes-7631 Nov 13 05:45:47.142: INFO: creating *v1.RoleBinding: csi-mock-volumes-7631-1183/external-snapshotter-leaderelection Nov 13 05:45:47.144: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-mock Nov 13 05:45:47.147: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7631 Nov 13 05:45:47.150: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7631 Nov 13 05:45:47.153: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7631 Nov 13 05:45:47.156: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7631 Nov 13 05:45:47.159: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7631 Nov 13 05:45:47.162: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7631 Nov 13 05:45:47.165: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7631 Nov 13 05:45:47.168: INFO: creating *v1.StatefulSet: csi-mock-volumes-7631-1183/csi-mockplugin Nov 13 05:45:47.172: INFO: creating *v1.StatefulSet: csi-mock-volumes-7631-1183/csi-mockplugin-attacher Nov 13 05:45:47.176: INFO: creating *v1.StatefulSet: csi-mock-volumes-7631-1183/csi-mockplugin-resizer Nov 13 05:45:47.179: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7631 to register on node node2 STEP: Creating pod Nov 13 05:45:56.694: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:45:56.699: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6kdl7] to have phase Bound Nov 13 05:45:56.701: INFO: PersistentVolumeClaim pvc-6kdl7 found but phase is Pending instead of Bound. 
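
The "Waiting up to timeout=5m0s ... found but phase is Pending" entries just above come from the framework polling the claim until it leaves Pending; the next entry records it reaching Bound. A minimal client-go sketch of that poll-until-Bound pattern (a hypothetical helper, assuming an already-initialized clientset; not the framework's actual code):

package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the named claim until its status phase is Bound or
// the timeout expires, mirroring the "found but phase is Pending instead of
// Bound" entries in this log. The clientset is assumed to be configured.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase == v1.ClaimBound {
			return true, nil
		}
		fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
		return false, nil
	})
}
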
Nov 13 05:45:58.706: INFO: PersistentVolumeClaim pvc-6kdl7 found and phase=Bound (2.006999954s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-bpbsc Nov 13 05:46:12.741: INFO: Deleting pod "pvc-volume-tester-bpbsc" in namespace "csi-mock-volumes-7631" Nov 13 05:46:12.746: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bpbsc" to be fully deleted STEP: Deleting claim pvc-6kdl7 Nov 13 05:46:22.761: INFO: Waiting up to 2m0s for PersistentVolume pvc-3a1b980a-32b6-483f-891c-17fe603a02f8 to get deleted Nov 13 05:46:22.764: INFO: PersistentVolume pvc-3a1b980a-32b6-483f-891c-17fe603a02f8 found and phase=Bound (3.132092ms) Nov 13 05:46:24.768: INFO: PersistentVolume pvc-3a1b980a-32b6-483f-891c-17fe603a02f8 found and phase=Released (2.007358248s) Nov 13 05:46:26.773: INFO: PersistentVolume pvc-3a1b980a-32b6-483f-891c-17fe603a02f8 found and phase=Released (4.011417746s) Nov 13 05:46:28.777: INFO: PersistentVolume pvc-3a1b980a-32b6-483f-891c-17fe603a02f8 found and phase=Released (6.015653541s) Nov 13 05:46:30.782: INFO: PersistentVolume pvc-3a1b980a-32b6-483f-891c-17fe603a02f8 was removed STEP: Deleting storageclass csi-mock-volumes-7631-scx7289 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7631 STEP: Waiting for namespaces [csi-mock-volumes-7631] to vanish STEP: uninstalling csi mock driver Nov 13 05:46:36.794: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-attacher Nov 13 05:46:36.797: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7631 Nov 13 05:46:36.801: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7631 Nov 13 05:46:36.805: INFO: deleting *v1.Role: csi-mock-volumes-7631-1183/external-attacher-cfg-csi-mock-volumes-7631 Nov 13 05:46:36.809: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7631-1183/csi-attacher-role-cfg Nov 13 05:46:36.812: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-provisioner Nov 13 05:46:36.816: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7631 Nov 13 05:46:36.822: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7631 Nov 13 05:46:36.825: INFO: deleting *v1.Role: csi-mock-volumes-7631-1183/external-provisioner-cfg-csi-mock-volumes-7631 Nov 13 05:46:36.829: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7631-1183/csi-provisioner-role-cfg Nov 13 05:46:36.833: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-resizer Nov 13 05:46:36.836: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7631 Nov 13 05:46:36.839: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7631 Nov 13 05:46:36.843: INFO: deleting *v1.Role: csi-mock-volumes-7631-1183/external-resizer-cfg-csi-mock-volumes-7631 Nov 13 05:46:36.846: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7631-1183/csi-resizer-role-cfg Nov 13 05:46:36.849: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-snapshotter Nov 13 05:46:36.852: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7631 Nov 13 05:46:36.855: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7631 Nov 13 05:46:36.858: INFO: deleting *v1.Role: csi-mock-volumes-7631-1183/external-snapshotter-leaderelection-csi-mock-volumes-7631 Nov 13 05:46:36.862: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7631-1183/external-snapshotter-leaderelection 
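
The run of "deleting *v1.*" entries above and below is the mock driver teardown removing each RBAC object in turn; the important property is that a delete which finds the object already gone is not treated as a failure, so the cleanup can be re-run safely. A minimal sketch of that idempotent delete pattern with client-go (hypothetical helper names, assuming a configured clientset):

package sketch

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteClusterRoleBindingIgnoreNotFound removes a cluster-scoped binding and
// tolerates objects that no longer exist, keeping teardown idempotent.
func deleteClusterRoleBindingIgnoreNotFound(cs kubernetes.Interface, name string) error {
	err := cs.RbacV1().ClusterRoleBindings().Delete(context.TODO(), name, metav1.DeleteOptions{})
	if apierrors.IsNotFound(err) {
		return nil // already deleted; nothing to do
	}
	return err
}

// cleanupMockDriverRBAC deletes the per-test bindings; the suffix stands in
// for a per-namespace identifier such as "csi-mock-volumes-7631".
func cleanupMockDriverRBAC(cs kubernetes.Interface, suffix string) {
	for _, name := range []string{
		"csi-attacher-role-" + suffix,
		"csi-provisioner-role-" + suffix,
		"csi-resizer-role-" + suffix,
		"csi-snapshotter-role-" + suffix,
	} {
		if err := deleteClusterRoleBindingIgnoreNotFound(cs, name); err != nil {
			fmt.Printf("deleting ClusterRoleBinding %s: %v\n", name, err)
		}
	}
}
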
Nov 13 05:46:36.865: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7631-1183/csi-mock Nov 13 05:46:36.869: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7631 Nov 13 05:46:36.872: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7631 Nov 13 05:46:36.875: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7631 Nov 13 05:46:36.878: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7631 Nov 13 05:46:36.882: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7631 Nov 13 05:46:36.885: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7631 Nov 13 05:46:36.888: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7631 Nov 13 05:46:36.891: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7631-1183/csi-mockplugin Nov 13 05:46:36.895: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7631-1183/csi-mockplugin-attacher Nov 13 05:46:36.898: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7631-1183/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-7631-1183 STEP: Waiting for namespaces [csi-mock-volumes-7631-1183] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:04.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:77.887 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":16,"skipped":419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:47.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Nov 13 05:46:51.542: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3f6ff8f9-98ce-40bb-ad52-5a8f00a373f6] Namespace:persistent-local-volumes-test-8730 PodName:hostexec-node1-bf9b2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:51.542: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:46:53.250: INFO: Creating a PV followed by a PVC Nov 13 05:46:53.258: INFO: Waiting for PV local-pvvmp94 to bind to PVC pvc-94mdg Nov 13 05:46:53.258: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-94mdg] to have phase Bound Nov 13 05:46:53.261: INFO: PersistentVolumeClaim pvc-94mdg found but phase is Pending instead of Bound. Nov 13 05:46:55.265: INFO: PersistentVolumeClaim pvc-94mdg found and phase=Bound (2.007238453s) Nov 13 05:46:55.265: INFO: Waiting up to 3m0s for PersistentVolume local-pvvmp94 to have phase Bound Nov 13 05:46:55.267: INFO: PersistentVolume local-pvvmp94 found and phase=Bound (2.239468ms) [It] should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 STEP: local-volume-type: dir STEP: Initializing test volumes Nov 13 05:46:55.272: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-647e02cb-3063-4d2d-985f-731c5f1b4ec3] Namespace:persistent-local-volumes-test-8730 PodName:hostexec-node1-bf9b2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:55.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:46:59.677: INFO: Creating a PV followed by a PVC Nov 13 05:46:59.683: INFO: Waiting for PV local-pv8nshv to bind to PVC pvc-965f2 Nov 13 05:46:59.683: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-965f2] to have phase Bound Nov 13 05:46:59.685: INFO: PersistentVolumeClaim pvc-965f2 found but phase is Pending instead of Bound. Nov 13 05:47:01.689: INFO: PersistentVolumeClaim pvc-965f2 found but phase is Pending instead of Bound. Nov 13 05:47:03.695: INFO: PersistentVolumeClaim pvc-965f2 found and phase=Bound (4.011956668s) Nov 13 05:47:03.695: INFO: Waiting up to 3m0s for PersistentVolume local-pv8nshv to have phase Bound Nov 13 05:47:03.701: INFO: PersistentVolume local-pv8nshv found and phase=Bound (5.747476ms) Nov 13 05:47:03.725: INFO: Waiting up to 5m0s for pod "pod-cadb273c-61ad-477f-9821-bbf4adf37d28" in namespace "persistent-local-volumes-test-8730" to be "Unschedulable" Nov 13 05:47:03.727: INFO: Pod "pod-cadb273c-61ad-477f-9821-bbf4adf37d28": Phase="Pending", Reason="", readiness=false. Elapsed: 1.885211ms Nov 13 05:47:05.731: INFO: Pod "pod-cadb273c-61ad-477f-9821-bbf4adf37d28": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006652255s Nov 13 05:47:05.731: INFO: Pod "pod-cadb273c-61ad-477f-9821-bbf4adf37d28" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Nov 13 05:47:05.732: INFO: Deleting PersistentVolumeClaim "pvc-94mdg" Nov 13 05:47:05.736: INFO: Deleting PersistentVolume "local-pvvmp94" STEP: Removing the test directory Nov 13 05:47:05.740: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3f6ff8f9-98ce-40bb-ad52-5a8f00a373f6] Namespace:persistent-local-volumes-test-8730 PodName:hostexec-node1-bf9b2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:47:05.740: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:05.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8730" for this suite. • [SLOW TEST:18.356 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":11,"skipped":395,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:48.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:46:52.529: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-1ae0eddd-47bf-4c02-a411-e120ea7f1f0e-backend && ln -s /tmp/local-volume-test-1ae0eddd-47bf-4c02-a411-e120ea7f1f0e-backend /tmp/local-volume-test-1ae0eddd-47bf-4c02-a411-e120ea7f1f0e] Namespace:persistent-local-volumes-test-1658 PodName:hostexec-node2-xvq54 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:52.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:46:52.638: INFO: Creating a PV followed by a PVC Nov 13 
05:46:52.644: INFO: Waiting for PV local-pvvfktk to bind to PVC pvc-vp442 Nov 13 05:46:52.644: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vp442] to have phase Bound Nov 13 05:46:52.646: INFO: PersistentVolumeClaim pvc-vp442 found but phase is Pending instead of Bound. Nov 13 05:46:54.651: INFO: PersistentVolumeClaim pvc-vp442 found and phase=Bound (2.00687905s) Nov 13 05:46:54.651: INFO: Waiting up to 3m0s for PersistentVolume local-pvvfktk to have phase Bound Nov 13 05:46:54.653: INFO: PersistentVolume local-pvvfktk found and phase=Bound (2.429806ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:47:00.680: INFO: pod "pod-882c6b0d-c64d-4e35-a22d-dcb0c88f00fb" created on Node "node2" STEP: Writing in pod1 Nov 13 05:47:00.680: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1658 PodName:pod-882c6b0d-c64d-4e35-a22d-dcb0c88f00fb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:47:00.680: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:47:00.781: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:47:00.781: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1658 PodName:pod-882c6b0d-c64d-4e35-a22d-dcb0c88f00fb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:47:00.781: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:47:00.865: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:47:16.889: INFO: pod "pod-05310d18-cd97-4be6-8c47-1684d993c932" created on Node "node2" Nov 13 05:47:16.889: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1658 PodName:pod-05310d18-cd97-4be6-8c47-1684d993c932 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:47:16.889: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:47:17.050: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 13 05:47:17.050: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1ae0eddd-47bf-4c02-a411-e120ea7f1f0e > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1658 PodName:pod-05310d18-cd97-4be6-8c47-1684d993c932 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:47:17.050: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:47:17.163: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1ae0eddd-47bf-4c02-a411-e120ea7f1f0e > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 13 05:47:17.163: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1658 PodName:pod-882c6b0d-c64d-4e35-a22d-dcb0c88f00fb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false 
Quiet:false} Nov 13 05:47:17.163: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:47:17.275: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-1ae0eddd-47bf-4c02-a411-e120ea7f1f0e", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-882c6b0d-c64d-4e35-a22d-dcb0c88f00fb in namespace persistent-local-volumes-test-1658 STEP: Deleting pod2 STEP: Deleting pod pod-05310d18-cd97-4be6-8c47-1684d993c932 in namespace persistent-local-volumes-test-1658 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:47:17.284: INFO: Deleting PersistentVolumeClaim "pvc-vp442" Nov 13 05:47:17.288: INFO: Deleting PersistentVolume "local-pvvfktk" STEP: Removing the test directory Nov 13 05:47:17.292: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1ae0eddd-47bf-4c02-a411-e120ea7f1f0e && rm -r /tmp/local-volume-test-1ae0eddd-47bf-4c02-a411-e120ea7f1f0e-backend] Namespace:persistent-local-volumes-test-1658 PodName:hostexec-node2-xvq54 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:47:17.292: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:17.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1658" for this suite. • [SLOW TEST:28.944 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":178,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:17.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Nov 13 05:47:17.477: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:17.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"volume-9012" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 GlusterFS [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:128 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:17.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 13 05:47:17.673: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:17.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-4614" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:90 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:17.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] deletion should be idempotent /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557 Nov 13 05:47:17.766: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:17.768: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "volume-provisioning-3672" for this suite. S [SKIPPING] [0.031 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 deletion should be idempotent [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:563 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:36.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-27 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:46:36.957: INFO: creating *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-attacher Nov 13 05:46:36.959: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-27 Nov 13 05:46:36.959: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-27 Nov 13 05:46:36.962: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-27 Nov 13 05:46:36.965: INFO: creating *v1.Role: csi-mock-volumes-27-3282/external-attacher-cfg-csi-mock-volumes-27 Nov 13 05:46:36.968: INFO: creating *v1.RoleBinding: csi-mock-volumes-27-3282/csi-attacher-role-cfg Nov 13 05:46:36.971: INFO: creating *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-provisioner Nov 13 05:46:36.974: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-27 Nov 13 05:46:36.974: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-27 Nov 13 05:46:36.977: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-27 Nov 13 05:46:36.979: INFO: creating *v1.Role: csi-mock-volumes-27-3282/external-provisioner-cfg-csi-mock-volumes-27 Nov 13 05:46:36.982: INFO: creating *v1.RoleBinding: csi-mock-volumes-27-3282/csi-provisioner-role-cfg Nov 13 05:46:36.985: INFO: creating *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-resizer Nov 13 05:46:36.987: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-27 Nov 13 05:46:36.987: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-27 Nov 13 05:46:36.990: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-27 Nov 13 05:46:36.993: INFO: creating *v1.Role: csi-mock-volumes-27-3282/external-resizer-cfg-csi-mock-volumes-27 Nov 13 05:46:36.995: INFO: creating *v1.RoleBinding: csi-mock-volumes-27-3282/csi-resizer-role-cfg Nov 13 05:46:36.998: INFO: creating *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-snapshotter Nov 13 05:46:37.001: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-27 Nov 13 05:46:37.001: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-27 Nov 13 05:46:37.004: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-27 Nov 13 05:46:37.007: INFO: creating *v1.Role: csi-mock-volumes-27-3282/external-snapshotter-leaderelection-csi-mock-volumes-27 Nov 13 05:46:37.010: INFO: creating *v1.RoleBinding: csi-mock-volumes-27-3282/external-snapshotter-leaderelection Nov 13 05:46:37.013: INFO: creating *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-mock Nov 13 05:46:37.015: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-27 Nov 13 05:46:37.018: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-27 Nov 13 05:46:37.022: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-27 Nov 13 05:46:37.025: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-27 Nov 13 05:46:37.028: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-27 Nov 13 05:46:37.031: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-27 Nov 13 05:46:37.033: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-27 Nov 13 05:46:37.036: INFO: creating *v1.StatefulSet: csi-mock-volumes-27-3282/csi-mockplugin Nov 13 05:46:37.040: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-27 Nov 13 05:46:37.042: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-27" Nov 13 05:46:37.044: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-27 to register on node node2 STEP: Creating pod Nov 13 05:46:46.560: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:46:46.564: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-2xj62] to have phase Bound Nov 13 05:46:46.567: INFO: PersistentVolumeClaim pvc-2xj62 found but phase is Pending instead of Bound. 
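
The "STEP: Checking PVC events" section that follows replays the ADDED/MODIFIED/DELETED notifications observed for pvc-2xj62 over the life of the claim. A minimal client-go sketch of such a watch on a single claim (illustrative names only, assuming a configured clientset; not the test's own helper):

package sketch

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchPVCEvents opens a watch scoped to one claim and logs each event type,
// similar to the "PVC event ADDED/MODIFIED/DELETED" lines in this test.
func watchPVCEvents(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	w, err := cs.CoreV1().PersistentVolumeClaims(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name, // only this claim
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pvc, ok := ev.Object.(*v1.PersistentVolumeClaim)
		if !ok {
			continue // e.g. a *metav1.Status error object on the channel
		}
		fmt.Printf("PVC event %s: %s phase=%s\n", ev.Type, pvc.Name, pvc.Status.Phase)
	}
	return nil
}
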
Nov 13 05:46:48.571: INFO: PersistentVolumeClaim pvc-2xj62 found and phase=Bound (2.00647021s) Nov 13 05:46:52.592: INFO: Deleting pod "pvc-volume-tester-lsdsm" in namespace "csi-mock-volumes-27" Nov 13 05:46:52.596: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lsdsm" to be fully deleted STEP: Checking PVC events Nov 13 05:46:59.626: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-2xj62", GenerateName:"pvc-", Namespace:"csi-mock-volumes-27", SelfLink:"", UID:"09767565-a95e-4410-8fa6-cae0fa6a216d", ResourceVersion:"206304", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379206, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00363ce28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00363ce40)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003435d10), VolumeMode:(*v1.PersistentVolumeMode)(0xc003435d20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:59.626: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-2xj62", GenerateName:"pvc-", Namespace:"csi-mock-volumes-27", SelfLink:"", UID:"09767565-a95e-4410-8fa6-cae0fa6a216d", ResourceVersion:"206305", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379206, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-27"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00363cea0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00363ceb8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00363ced0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00363cee8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003435d50), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc003435d60), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:59.626: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-2xj62", GenerateName:"pvc-", Namespace:"csi-mock-volumes-27", SelfLink:"", UID:"09767565-a95e-4410-8fa6-cae0fa6a216d", ResourceVersion:"206311", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379206, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-27"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059a6378), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059a6390)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059a63a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059a63c0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-09767565-a95e-4410-8fa6-cae0fa6a216d", StorageClassName:(*string)(0xc005986910), VolumeMode:(*v1.PersistentVolumeMode)(0xc005986920), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:59.627: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-2xj62", GenerateName:"pvc-", Namespace:"csi-mock-volumes-27", SelfLink:"", UID:"09767565-a95e-4410-8fa6-cae0fa6a216d", ResourceVersion:"206313", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379206, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-27"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005897bf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005897c08)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005897c20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005897c38)}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-09767565-a95e-4410-8fa6-cae0fa6a216d", StorageClassName:(*string)(0xc0058bee60), VolumeMode:(*v1.PersistentVolumeMode)(0xc0058bee70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:59.627: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-2xj62", GenerateName:"pvc-", Namespace:"csi-mock-volumes-27", SelfLink:"", UID:"09767565-a95e-4410-8fa6-cae0fa6a216d", ResourceVersion:"206613", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379206, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc005897c68), DeletionGracePeriodSeconds:(*int64)(0xc0047ed9f8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-27"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005897c80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005897c98)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005897cb0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005897cc8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-09767565-a95e-4410-8fa6-cae0fa6a216d", StorageClassName:(*string)(0xc0058beeb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0058beec0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:46:59.627: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-2xj62", GenerateName:"pvc-", Namespace:"csi-mock-volumes-27", SelfLink:"", UID:"09767565-a95e-4410-8fa6-cae0fa6a216d", ResourceVersion:"206614", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379206, loc:(*time.Location)(0x9e12f00)}}, 
DeletionTimestamp:(*v1.Time)(0xc00593c0d8), DeletionGracePeriodSeconds:(*int64)(0xc003f8d488), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-27"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00593c0f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00593c108)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00593c120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00593c138)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-09767565-a95e-4410-8fa6-cae0fa6a216d", StorageClassName:(*string)(0xc0059109f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc005910a00), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-lsdsm Nov 13 05:46:59.627: INFO: Deleting pod "pvc-volume-tester-lsdsm" in namespace "csi-mock-volumes-27" STEP: Deleting claim pvc-2xj62 STEP: Deleting storageclass csi-mock-volumes-27-scj9rmx STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-27 STEP: Waiting for namespaces [csi-mock-volumes-27] to vanish STEP: uninstalling csi mock driver Nov 13 05:47:05.643: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-attacher Nov 13 05:47:05.646: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-27 Nov 13 05:47:05.650: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-27 Nov 13 05:47:05.654: INFO: deleting *v1.Role: csi-mock-volumes-27-3282/external-attacher-cfg-csi-mock-volumes-27 Nov 13 05:47:05.658: INFO: deleting *v1.RoleBinding: csi-mock-volumes-27-3282/csi-attacher-role-cfg Nov 13 05:47:05.661: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-provisioner Nov 13 05:47:05.664: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-27 Nov 13 05:47:05.668: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-27 Nov 13 05:47:05.671: INFO: deleting *v1.Role: csi-mock-volumes-27-3282/external-provisioner-cfg-csi-mock-volumes-27 Nov 13 05:47:05.675: INFO: deleting *v1.RoleBinding: csi-mock-volumes-27-3282/csi-provisioner-role-cfg Nov 13 05:47:05.678: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-resizer Nov 13 05:47:05.682: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-27 Nov 13 05:47:05.686: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-27 Nov 13 05:47:05.690: INFO: deleting *v1.Role: 
csi-mock-volumes-27-3282/external-resizer-cfg-csi-mock-volumes-27 Nov 13 05:47:05.694: INFO: deleting *v1.RoleBinding: csi-mock-volumes-27-3282/csi-resizer-role-cfg Nov 13 05:47:05.698: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-snapshotter Nov 13 05:47:05.701: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-27 Nov 13 05:47:05.704: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-27 Nov 13 05:47:05.708: INFO: deleting *v1.Role: csi-mock-volumes-27-3282/external-snapshotter-leaderelection-csi-mock-volumes-27 Nov 13 05:47:05.714: INFO: deleting *v1.RoleBinding: csi-mock-volumes-27-3282/external-snapshotter-leaderelection Nov 13 05:47:05.720: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-27-3282/csi-mock Nov 13 05:47:05.726: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-27 Nov 13 05:47:05.730: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-27 Nov 13 05:47:05.736: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-27 Nov 13 05:47:05.739: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-27 Nov 13 05:47:05.743: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-27 Nov 13 05:47:05.748: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-27 Nov 13 05:47:05.753: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-27 Nov 13 05:47:05.760: INFO: deleting *v1.StatefulSet: csi-mock-volumes-27-3282/csi-mockplugin Nov 13 05:47:05.764: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-27 STEP: deleting the driver namespace: csi-mock-volumes-27-3282 STEP: Waiting for namespaces [csi-mock-volumes-27-3282] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:17.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:40.879 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":15,"skipped":639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:34.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 STEP: Building a driver namespace object, basename csi-mock-volumes-1938 STEP: Waiting for a default service account to be provisioned in namespace STEP: 
deploying csi mock driver Nov 13 05:45:34.464: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-attacher Nov 13 05:45:34.466: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1938 Nov 13 05:45:34.466: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1938 Nov 13 05:45:34.469: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1938 Nov 13 05:45:34.472: INFO: creating *v1.Role: csi-mock-volumes-1938-6359/external-attacher-cfg-csi-mock-volumes-1938 Nov 13 05:45:34.475: INFO: creating *v1.RoleBinding: csi-mock-volumes-1938-6359/csi-attacher-role-cfg Nov 13 05:45:34.478: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-provisioner Nov 13 05:45:34.481: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1938 Nov 13 05:45:34.481: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1938 Nov 13 05:45:34.483: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1938 Nov 13 05:45:34.486: INFO: creating *v1.Role: csi-mock-volumes-1938-6359/external-provisioner-cfg-csi-mock-volumes-1938 Nov 13 05:45:34.489: INFO: creating *v1.RoleBinding: csi-mock-volumes-1938-6359/csi-provisioner-role-cfg Nov 13 05:45:34.492: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-resizer Nov 13 05:45:34.495: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1938 Nov 13 05:45:34.495: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1938 Nov 13 05:45:34.497: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1938 Nov 13 05:45:34.500: INFO: creating *v1.Role: csi-mock-volumes-1938-6359/external-resizer-cfg-csi-mock-volumes-1938 Nov 13 05:45:34.503: INFO: creating *v1.RoleBinding: csi-mock-volumes-1938-6359/csi-resizer-role-cfg Nov 13 05:45:34.505: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-snapshotter Nov 13 05:45:34.508: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1938 Nov 13 05:45:34.508: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1938 Nov 13 05:45:34.510: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1938 Nov 13 05:45:34.513: INFO: creating *v1.Role: csi-mock-volumes-1938-6359/external-snapshotter-leaderelection-csi-mock-volumes-1938 Nov 13 05:45:34.516: INFO: creating *v1.RoleBinding: csi-mock-volumes-1938-6359/external-snapshotter-leaderelection Nov 13 05:45:34.518: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-mock Nov 13 05:45:34.521: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1938 Nov 13 05:45:34.523: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1938 Nov 13 05:45:34.526: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1938 Nov 13 05:45:34.528: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1938 Nov 13 05:45:34.531: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1938 Nov 13 05:45:34.533: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1938 Nov 13 05:45:34.536: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1938 Nov 13 05:45:34.538: INFO: creating *v1.StatefulSet: csi-mock-volumes-1938-6359/csi-mockplugin Nov 13 05:45:34.542: INFO: creating *v1.CSIDriver: 
csi-mock-csi-mock-volumes-1938 Nov 13 05:45:34.545: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1938" Nov 13 05:45:34.547: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1938 to register on node node1 STEP: Creating pod with fsGroup Nov 13 05:45:49.067: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:45:49.072: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-k5p2x] to have phase Bound Nov 13 05:45:49.073: INFO: PersistentVolumeClaim pvc-k5p2x found but phase is Pending instead of Bound. Nov 13 05:45:51.076: INFO: PersistentVolumeClaim pvc-k5p2x found and phase=Bound (2.004867428s) Nov 13 05:45:55.097: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-1938] Namespace:csi-mock-volumes-1938 PodName:pvc-volume-tester-b742p ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:55.097: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:55.179: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-1938/csi-mock-volumes-1938'; sync] Namespace:csi-mock-volumes-1938 PodName:pvc-volume-tester-b742p ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:55.179: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:57.397: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-1938/csi-mock-volumes-1938] Namespace:csi-mock-volumes-1938 PodName:pvc-volume-tester-b742p ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:57.397: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:45:57.934: INFO: pod csi-mock-volumes-1938/pvc-volume-tester-b742p exec for cmd ls -l /mnt/test/csi-mock-volumes-1938/csi-mock-volumes-1938, stdout: -rw-r--r-- 1 root 20791 13 Nov 13 05:45 /mnt/test/csi-mock-volumes-1938/csi-mock-volumes-1938, stderr: Nov 13 05:45:57.934: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-1938] Namespace:csi-mock-volumes-1938 PodName:pvc-volume-tester-b742p ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:45:57.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-b742p Nov 13 05:45:58.043: INFO: Deleting pod "pvc-volume-tester-b742p" in namespace "csi-mock-volumes-1938" Nov 13 05:45:58.047: INFO: Wait up to 5m0s for pod "pvc-volume-tester-b742p" to be fully deleted STEP: Deleting claim pvc-k5p2x Nov 13 05:46:42.059: INFO: Waiting up to 2m0s for PersistentVolume pvc-af0dc688-04ba-4755-aa2d-b4104fcc70f4 to get deleted Nov 13 05:46:42.062: INFO: PersistentVolume pvc-af0dc688-04ba-4755-aa2d-b4104fcc70f4 found and phase=Bound (2.502532ms) Nov 13 05:46:44.068: INFO: PersistentVolume pvc-af0dc688-04ba-4755-aa2d-b4104fcc70f4 was removed STEP: Deleting storageclass csi-mock-volumes-1938-scq4g7s STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1938 STEP: Waiting for namespaces [csi-mock-volumes-1938] to vanish STEP: uninstalling csi mock driver Nov 13 05:46:50.082: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-attacher Nov 13 05:46:50.086: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1938 Nov 13 05:46:50.089: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1938 Nov 13 05:46:50.093: INFO: 
deleting *v1.Role: csi-mock-volumes-1938-6359/external-attacher-cfg-csi-mock-volumes-1938 Nov 13 05:46:50.096: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1938-6359/csi-attacher-role-cfg Nov 13 05:46:50.100: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-provisioner Nov 13 05:46:50.103: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1938 Nov 13 05:46:50.106: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1938 Nov 13 05:46:50.114: INFO: deleting *v1.Role: csi-mock-volumes-1938-6359/external-provisioner-cfg-csi-mock-volumes-1938 Nov 13 05:46:50.122: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1938-6359/csi-provisioner-role-cfg Nov 13 05:46:50.130: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-resizer Nov 13 05:46:50.135: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1938 Nov 13 05:46:50.139: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1938 Nov 13 05:46:50.143: INFO: deleting *v1.Role: csi-mock-volumes-1938-6359/external-resizer-cfg-csi-mock-volumes-1938 Nov 13 05:46:50.146: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1938-6359/csi-resizer-role-cfg Nov 13 05:46:50.150: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-snapshotter Nov 13 05:46:50.153: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1938 Nov 13 05:46:50.157: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1938 Nov 13 05:46:50.160: INFO: deleting *v1.Role: csi-mock-volumes-1938-6359/external-snapshotter-leaderelection-csi-mock-volumes-1938 Nov 13 05:46:50.163: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1938-6359/external-snapshotter-leaderelection Nov 13 05:46:50.167: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1938-6359/csi-mock Nov 13 05:46:50.170: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1938 Nov 13 05:46:50.173: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1938 Nov 13 05:46:50.176: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1938 Nov 13 05:46:50.179: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1938 Nov 13 05:46:50.182: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1938 Nov 13 05:46:50.185: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1938 Nov 13 05:46:50.189: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1938 Nov 13 05:46:50.192: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1938-6359/csi-mockplugin Nov 13 05:46:50.197: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1938 STEP: deleting the driver namespace: csi-mock-volumes-1938-6359 STEP: Waiting for namespaces [csi-mock-volumes-1938-6359] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:18.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:103.815 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436 should modify fsGroup 
if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":7,"skipped":182,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:17.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 13 05:47:43.938: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5164 PodName:hostexec-node1-592bp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:47:43.938: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:47:44.554: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 13 05:47:44.554: INFO: exec node1: stdout: "0\n" Nov 13 05:47:44.554: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 13 05:47:44.554: INFO: exec node1: exit code: 0 Nov 13 05:47:44.554: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:44.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5164" for this suite. 
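For reference, the fsGroupPolicy=default spec that passed above (pod pvc-volume-tester-b742p in csi-mock-volumes-1938) comes down to creating a pod whose security context requests an fsGroup; with the default policy the kubelet applies that group to the mounted volume, which is what the `ls -l` output earlier shows. Below is a minimal client-go sketch of such a pod, not the framework's own helper: the image and sleep command are illustrative, the claim name, namespace and group id 20791 are taken from the log above.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	fsGroup := int64(20791) // group id visible in the ls -l output of the passing run
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-volume-tester-"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:         "volume-tester",
				Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // illustrative image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "my-volume", MountPath: "/mnt/test"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "my-volume",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-k5p2x"},
				},
			}},
		},
	}
	// With fsGroupPolicy left at its default, the kubelet chowns the mounted
	// volume to fsGroup, which the test then asserts via `ls -l` in the pod.
	if _, err := cs.CoreV1().Pods("csi-mock-volumes-1938").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}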
S [SKIPPING] in Spec Setup (BeforeEach) [26.679 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:44.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Nov 13 05:47:44.618: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:44.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-3430" for this suite. 
S [SKIPPING] [0.042 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:459 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:18.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:47:26.298: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-8d6461af-74f6-44d7-ae58-0956569bc2dd-backend && ln -s /tmp/local-volume-test-8d6461af-74f6-44d7-ae58-0956569bc2dd-backend /tmp/local-volume-test-8d6461af-74f6-44d7-ae58-0956569bc2dd] Namespace:persistent-local-volumes-test-2957 PodName:hostexec-node2-p2ghr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:47:26.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:47:26.383: INFO: Creating a PV followed by a PVC Nov 13 05:47:26.390: INFO: Waiting for PV local-pvbb9dx to bind to PVC pvc-g78qd Nov 13 05:47:26.390: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-g78qd] to have phase Bound Nov 13 05:47:26.392: INFO: PersistentVolumeClaim pvc-g78qd found but phase is Pending instead of Bound. 
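Several runs in this log wait on a claim ("Waiting up to timeout=... for PersistentVolumeClaims [...] to have phase Bound") and print Pending once or twice before Bound. Outside the framework, the same wait is a short poll with client-go; this sketch reuses the namespace and claim name from the dir-link run above and assumes the kubeconfig path printed in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls a PersistentVolumeClaim until it reports phase Bound,
// mirroring the Pending -> Bound progression visible in the log above.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolumeClaim %s phase=%s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(cs, "persistent-local-volumes-test-2957", "pvc-g78qd", 3*time.Minute); err != nil {
		panic(err)
	}
}

The 2-second poll interval matches the roughly 2.004s deltas the framework reports between its Pending and Bound checks.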
Nov 13 05:47:28.394: INFO: PersistentVolumeClaim pvc-g78qd found and phase=Bound (2.004546226s) Nov 13 05:47:28.394: INFO: Waiting up to 3m0s for PersistentVolume local-pvbb9dx to have phase Bound Nov 13 05:47:28.397: INFO: PersistentVolume local-pvbb9dx found and phase=Bound (2.330201ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:47:38.423: INFO: pod "pod-ec74ed5e-3169-4976-88e2-5c7f3b899a5a" created on Node "node2" STEP: Writing in pod1 Nov 13 05:47:38.423: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2957 PodName:pod-ec74ed5e-3169-4976-88e2-5c7f3b899a5a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:47:38.423: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:47:38.666: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:47:38.666: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2957 PodName:pod-ec74ed5e-3169-4976-88e2-5c7f3b899a5a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:47:38.666: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:47:38.759: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-ec74ed5e-3169-4976-88e2-5c7f3b899a5a in namespace persistent-local-volumes-test-2957 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:47:48.787: INFO: pod "pod-9cd05d62-8f50-4347-a778-f1ad8e37648b" created on Node "node2" STEP: Reading in pod2 Nov 13 05:47:48.787: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2957 PodName:pod-9cd05d62-8f50-4347-a778-f1ad8e37648b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:47:48.787: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:47:49.076: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-9cd05d62-8f50-4347-a778-f1ad8e37648b in namespace persistent-local-volumes-test-2957 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:47:49.082: INFO: Deleting PersistentVolumeClaim "pvc-g78qd" Nov 13 05:47:49.085: INFO: Deleting PersistentVolume "local-pvbb9dx" STEP: Removing the test directory Nov 13 05:47:49.089: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8d6461af-74f6-44d7-ae58-0956569bc2dd && rm -r /tmp/local-volume-test-8d6461af-74f6-44d7-ae58-0956569bc2dd-backend] Namespace:persistent-local-volumes-test-2957 PodName:hostexec-node2-p2ghr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:47:49.089: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:49.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2957" for this suite. • [SLOW TEST:30.955 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":197,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:17.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:47:17.861: INFO: The status of Pod test-hostpath-type-bntzh is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:19.866: INFO: The status of Pod test-hostpath-type-bntzh is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:21.865: INFO: The status of Pod test-hostpath-type-bntzh is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:23.866: INFO: The status of Pod test-hostpath-type-bntzh is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:25.866: INFO: The status of Pod test-hostpath-type-bntzh is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:27.866: INFO: The status of Pod test-hostpath-type-bntzh is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:29.866: INFO: The status of Pod test-hostpath-type-bntzh is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:31.866: INFO: The status of Pod test-hostpath-type-bntzh is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:47:49.921: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "host-path-type-directory-6125" for this suite. • [SLOW TEST:32.118 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory","total":-1,"completed":16,"skipped":652,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:44.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 13 05:47:44.667: INFO: The status of Pod test-hostpath-type-7hhkx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:46.671: INFO: The status of Pod test-hostpath-type-7hhkx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:48.670: INFO: The status of Pod test-hostpath-type-7hhkx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:50.671: INFO: The status of Pod test-hostpath-type-7hhkx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:52.671: INFO: The status of Pod test-hostpath-type-7hhkx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:54.672: INFO: The status of Pod test-hostpath-type-7hhkx is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:47:56.670: INFO: The status of Pod test-hostpath-type-7hhkx is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:12.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-3461" for this suite. 
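The HostPathType cases around here pass because the kubelet validates the declared hostPath type at mount time: HostPathDirectoryOrCreate creates 'adir', while HostPathDirectory refuses a path that does not exist, and the spec only has to watch for the resulting error event. A rough sketch of the volume definition that triggers the failure path follows; the pod skeleton, image, and sleep command are assumptions, only the path name and type come from the spec title above.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathPod builds a pod that mounts a hostPath volume with an explicit type.
func hostPathPod(path string, t corev1.HostPathType) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "test-hostpath-type-"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "host-path-testing",
				Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // illustrative image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "host", MountPath: "/mnt/hostpath"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &t},
				},
			}},
		},
	}
}

func main() {
	// HostPathDirectoryOrCreate would create the directory; HostPathDirectory only
	// accepts an existing one, so this pod is expected to surface a mount error event.
	_ = hostPathPod("/tmp/does-not-exist-dir", corev1.HostPathDirectory)
}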
• [SLOW TEST:28.097 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev","total":-1,"completed":9,"skipped":379,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:49.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-1fda0ee2-bdce-4bb9-afe2-6e5e6095abf1" Nov 13 05:47:55.993: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1fda0ee2-bdce-4bb9-afe2-6e5e6095abf1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1fda0ee2-bdce-4bb9-afe2-6e5e6095abf1" "/tmp/local-volume-test-1fda0ee2-bdce-4bb9-afe2-6e5e6095abf1"] Namespace:persistent-local-volumes-test-1566 PodName:hostexec-node2-p27ll ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:47:55.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:47:56.161: INFO: Creating a PV followed by a PVC Nov 13 05:47:56.168: INFO: Waiting for PV local-pvnnkr4 to bind to PVC pvc-4mdv5 Nov 13 05:47:56.168: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4mdv5] to have phase Bound Nov 13 05:47:56.170: INFO: PersistentVolumeClaim pvc-4mdv5 found but phase is Pending instead of Bound. 
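The recurring ExecWithOptions entries are the framework streaming a /bin/sh command into a running container and capturing stdout/stderr. A standalone equivalent uses client-go's remotecommand package; this sketch assumes the target pod is still running and borrows the pod, namespace, container, and command from the tmpfs run above rather than reproducing the framework's helper.

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Build the exec subresource request, equivalent to what ExecWithOptions logs.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("persistent-local-volumes-test-1566").
		Name("pod-c2ffc60a-984a-462b-96f1-ee594f68fef6").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "write-pod",
			Command:   []string{"/bin/sh", "-c", "cat /mnt/volume1/test-file"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout: %q stderr: %q\n", stdout.String(), stderr.String())
}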
Nov 13 05:47:58.174: INFO: PersistentVolumeClaim pvc-4mdv5 found and phase=Bound (2.006152025s) Nov 13 05:47:58.174: INFO: Waiting up to 3m0s for PersistentVolume local-pvnnkr4 to have phase Bound Nov 13 05:47:58.177: INFO: PersistentVolume local-pvnnkr4 found and phase=Bound (3.347013ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:48:08.205: INFO: pod "pod-c2ffc60a-984a-462b-96f1-ee594f68fef6" created on Node "node2" STEP: Writing in pod1 Nov 13 05:48:08.205: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1566 PodName:pod-c2ffc60a-984a-462b-96f1-ee594f68fef6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:08.205: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:08.301: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:48:08.301: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1566 PodName:pod-c2ffc60a-984a-462b-96f1-ee594f68fef6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:08.301: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:08.376: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-c2ffc60a-984a-462b-96f1-ee594f68fef6 in namespace persistent-local-volumes-test-1566 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:48:18.401: INFO: pod "pod-1628c012-a453-4b9c-9bcb-19024b001edf" created on Node "node2" STEP: Reading in pod2 Nov 13 05:48:18.401: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1566 PodName:pod-1628c012-a453-4b9c-9bcb-19024b001edf ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:18.401: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:18.478: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-1628c012-a453-4b9c-9bcb-19024b001edf in namespace persistent-local-volumes-test-1566 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:48:18.490: INFO: Deleting PersistentVolumeClaim "pvc-4mdv5" Nov 13 05:48:18.494: INFO: Deleting PersistentVolume "local-pvnnkr4" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-1fda0ee2-bdce-4bb9-afe2-6e5e6095abf1" Nov 13 05:48:18.498: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1fda0ee2-bdce-4bb9-afe2-6e5e6095abf1"] Namespace:persistent-local-volumes-test-1566 PodName:hostexec-node2-p27ll ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:18.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:18.771: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-1fda0ee2-bdce-4bb9-afe2-6e5e6095abf1] Namespace:persistent-local-volumes-test-1566 PodName:hostexec-node2-p27ll ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:18.771: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:18.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1566" for this suite. • [SLOW TEST:28.921 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":17,"skipped":660,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:18.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:48:18.938: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:18.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-1014" for this suite. 
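The "Requires at least 2 nodes (not -1)" skip just above is a pre-test gate on the number of usable worker nodes; the -1 most likely reflects the framework never populating its node count for this local provider. A rough stand-in for that gate with client-go is sketched below; the "at least 2" threshold mirrors the skip message, the Ready/unschedulable filter is an assumption about what counts as usable.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// schedulableNodes counts nodes that are Ready and not marked unschedulable.
func schedulableNodes(cs kubernetes.Interface) (int, error) {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	count := 0
	for _, n := range nodes.Items {
		if n.Spec.Unschedulable {
			continue
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				count++
				break
			}
		}
	}
	return count, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	n, err := schedulableNodes(cs)
	if err != nil {
		panic(err)
	}
	if n < 2 {
		fmt.Printf("skipping: requires at least 2 nodes, found %d\n", n)
	}
}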
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:05.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-1059 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:47:05.977: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-attacher Nov 13 05:47:05.980: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1059 Nov 13 05:47:05.980: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1059 Nov 13 05:47:05.984: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1059 Nov 13 05:47:05.987: INFO: creating *v1.Role: csi-mock-volumes-1059-3051/external-attacher-cfg-csi-mock-volumes-1059 Nov 13 05:47:05.990: INFO: creating *v1.RoleBinding: csi-mock-volumes-1059-3051/csi-attacher-role-cfg Nov 13 05:47:05.993: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-provisioner Nov 13 05:47:05.995: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1059 Nov 13 05:47:05.995: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1059 Nov 13 05:47:05.998: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1059 Nov 13 05:47:06.001: INFO: creating *v1.Role: csi-mock-volumes-1059-3051/external-provisioner-cfg-csi-mock-volumes-1059 Nov 13 05:47:06.004: INFO: creating *v1.RoleBinding: csi-mock-volumes-1059-3051/csi-provisioner-role-cfg Nov 13 05:47:06.009: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-resizer Nov 13 05:47:06.014: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1059 Nov 13 05:47:06.014: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1059 Nov 13 05:47:06.019: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1059 Nov 13 05:47:06.024: INFO: creating *v1.Role: csi-mock-volumes-1059-3051/external-resizer-cfg-csi-mock-volumes-1059 Nov 13 05:47:06.030: INFO: creating *v1.RoleBinding: csi-mock-volumes-1059-3051/csi-resizer-role-cfg Nov 13 05:47:06.033: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-snapshotter Nov 13 05:47:06.036: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-1059 Nov 13 05:47:06.036: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1059 Nov 13 05:47:06.038: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1059 Nov 13 05:47:06.041: INFO: creating *v1.Role: csi-mock-volumes-1059-3051/external-snapshotter-leaderelection-csi-mock-volumes-1059 Nov 13 05:47:06.044: INFO: creating *v1.RoleBinding: csi-mock-volumes-1059-3051/external-snapshotter-leaderelection Nov 13 05:47:06.046: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-mock Nov 13 05:47:06.049: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1059 Nov 13 05:47:06.051: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1059 Nov 13 05:47:06.055: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1059 Nov 13 05:47:06.057: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1059 Nov 13 05:47:06.060: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1059 Nov 13 05:47:06.063: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1059 Nov 13 05:47:06.065: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1059 Nov 13 05:47:06.068: INFO: creating *v1.StatefulSet: csi-mock-volumes-1059-3051/csi-mockplugin Nov 13 05:47:06.073: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1059 Nov 13 05:47:06.076: INFO: creating *v1.StatefulSet: csi-mock-volumes-1059-3051/csi-mockplugin-attacher Nov 13 05:47:06.080: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1059" Nov 13 05:47:06.082: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1059 to register on node node2 STEP: Creating pod Nov 13 05:47:22.349: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:47:22.353: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cn69m] to have phase Bound Nov 13 05:47:22.355: INFO: PersistentVolumeClaim pvc-cn69m found but phase is Pending instead of Bound. 
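The podInfoOnMount case being set up here hinges on the CSIDriver object registered for the mock plugin: when PodInfoOnMount is true, the kubelet passes pod metadata (pod.name, pod.namespace, pod.uid, serviceAccount.name, ephemeral) into NodePublishVolume as volume context, which is what the "Found volume attribute" lines further down confirm. The sketch below registers such a driver object with client-go; the driver name is copied from this run, the AttachRequired setting and everything else is a minimal assumption, not the mock driver's actual manifest.

package main

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	attachRequired := true
	podInfoOnMount := true // the switch this spec exercises
	driver := &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-csi-mock-volumes-1059"},
		Spec: storagev1.CSIDriverSpec{
			AttachRequired: &attachRequired,
			PodInfoOnMount: &podInfoOnMount,
		},
	}
	// With PodInfoOnMount=true the kubelet adds csi.storage.k8s.io/pod.name,
	// pod.namespace, pod.uid, serviceAccount.name and ephemeral to the volume
	// context of every NodePublishVolume call made against this driver.
	if _, err := cs.StorageV1().CSIDrivers().Create(context.TODO(), driver, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}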
Nov 13 05:47:24.361: INFO: PersistentVolumeClaim pvc-cn69m found and phase=Bound (2.007520307s) STEP: checking for CSIInlineVolumes feature Nov 13 05:47:48.396: INFO: Pod inline-volume-4sp6k has the following logs: Nov 13 05:47:48.402: INFO: Deleting pod "inline-volume-4sp6k" in namespace "csi-mock-volumes-1059" Nov 13 05:47:48.409: INFO: Wait up to 5m0s for pod "inline-volume-4sp6k" to be fully deleted STEP: Deleting the previously created pod Nov 13 05:47:54.415: INFO: Deleting pod "pvc-volume-tester-hjvxx" in namespace "csi-mock-volumes-1059" Nov 13 05:47:54.419: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hjvxx" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:48:04.442: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-hjvxx Nov 13 05:48:04.442: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-1059 Nov 13 05:48:04.442: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 2274997a-53fb-491f-a84a-2360bb4d5320 Nov 13 05:48:04.442: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Nov 13 05:48:04.442: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false Nov 13 05:48:04.442: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2274997a-53fb-491f-a84a-2360bb4d5320/volumes/kubernetes.io~csi/pvc-49d83426-0352-4770-84ba-6dba819d4d98/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-hjvxx Nov 13 05:48:04.442: INFO: Deleting pod "pvc-volume-tester-hjvxx" in namespace "csi-mock-volumes-1059" STEP: Deleting claim pvc-cn69m Nov 13 05:48:04.450: INFO: Waiting up to 2m0s for PersistentVolume pvc-49d83426-0352-4770-84ba-6dba819d4d98 to get deleted Nov 13 05:48:04.452: INFO: PersistentVolume pvc-49d83426-0352-4770-84ba-6dba819d4d98 found and phase=Bound (2.130793ms) Nov 13 05:48:06.455: INFO: PersistentVolume pvc-49d83426-0352-4770-84ba-6dba819d4d98 found and phase=Released (2.005567975s) Nov 13 05:48:08.459: INFO: PersistentVolume pvc-49d83426-0352-4770-84ba-6dba819d4d98 was removed STEP: Deleting storageclass csi-mock-volumes-1059-scnm68v STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1059 STEP: Waiting for namespaces [csi-mock-volumes-1059] to vanish STEP: uninstalling csi mock driver Nov 13 05:48:14.475: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-attacher Nov 13 05:48:14.479: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1059 Nov 13 05:48:14.482: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1059 Nov 13 05:48:14.486: INFO: deleting *v1.Role: csi-mock-volumes-1059-3051/external-attacher-cfg-csi-mock-volumes-1059 Nov 13 05:48:14.489: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1059-3051/csi-attacher-role-cfg Nov 13 05:48:14.493: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-provisioner Nov 13 05:48:14.500: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1059 Nov 13 05:48:14.503: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1059 Nov 13 05:48:14.508: INFO: deleting *v1.Role: csi-mock-volumes-1059-3051/external-provisioner-cfg-csi-mock-volumes-1059 Nov 13 05:48:14.519: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1059-3051/csi-provisioner-role-cfg Nov 13 
05:48:14.527: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-resizer Nov 13 05:48:14.533: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1059 Nov 13 05:48:14.537: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1059 Nov 13 05:48:14.540: INFO: deleting *v1.Role: csi-mock-volumes-1059-3051/external-resizer-cfg-csi-mock-volumes-1059 Nov 13 05:48:14.543: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1059-3051/csi-resizer-role-cfg Nov 13 05:48:14.547: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-snapshotter Nov 13 05:48:14.551: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1059 Nov 13 05:48:14.554: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1059 Nov 13 05:48:14.558: INFO: deleting *v1.Role: csi-mock-volumes-1059-3051/external-snapshotter-leaderelection-csi-mock-volumes-1059 Nov 13 05:48:14.561: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1059-3051/external-snapshotter-leaderelection Nov 13 05:48:14.564: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1059-3051/csi-mock Nov 13 05:48:14.567: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1059 Nov 13 05:48:14.571: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1059 Nov 13 05:48:14.574: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1059 Nov 13 05:48:14.577: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1059 Nov 13 05:48:14.580: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1059 Nov 13 05:48:14.583: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1059 Nov 13 05:48:14.587: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1059 Nov 13 05:48:14.591: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1059-3051/csi-mockplugin Nov 13 05:48:14.595: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1059 Nov 13 05:48:14.599: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1059-3051/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1059-3051 STEP: Waiting for namespaces [csi-mock-volumes-1059-3051] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:26.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:80.699 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":12,"skipped":427,"failed":0} SSS ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory","total":-1,"completed":7,"skipped":268,"failed":0} [BeforeEach] 
[sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:46:47.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "node1" STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-1aceb9ac-c444-4099-94a0-350135ab0e61" Nov 13 05:46:49.714: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1aceb9ac-c444-4099-94a0-350135ab0e61" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1aceb9ac-c444-4099-94a0-350135ab0e61" "/tmp/local-volume-test-1aceb9ac-c444-4099-94a0-350135ab0e61"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:49.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-60d9c491-baf5-4a8f-a320-07ed8d9dd7a8" Nov 13 05:46:49.808: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-60d9c491-baf5-4a8f-a320-07ed8d9dd7a8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-60d9c491-baf5-4a8f-a320-07ed8d9dd7a8" "/tmp/local-volume-test-60d9c491-baf5-4a8f-a320-07ed8d9dd7a8"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:49.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-b0a88616-0228-48d1-8faf-985fb9155041" Nov 13 05:46:49.984: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b0a88616-0228-48d1-8faf-985fb9155041" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b0a88616-0228-48d1-8faf-985fb9155041" "/tmp/local-volume-test-b0a88616-0228-48d1-8faf-985fb9155041"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:49.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-bc0cb856-7a3a-499d-b9bf-68edeae8b80f" Nov 13 05:46:50.252: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-bc0cb856-7a3a-499d-b9bf-68edeae8b80f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-bc0cb856-7a3a-499d-b9bf-68edeae8b80f" "/tmp/local-volume-test-bc0cb856-7a3a-499d-b9bf-68edeae8b80f"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 
05:46:50.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-9e22c8a5-6ea8-416c-8a0b-d6c003b2354d" Nov 13 05:46:50.871: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9e22c8a5-6ea8-416c-8a0b-d6c003b2354d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9e22c8a5-6ea8-416c-8a0b-d6c003b2354d" "/tmp/local-volume-test-9e22c8a5-6ea8-416c-8a0b-d6c003b2354d"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:50.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-001ef5b9-cc00-4f82-aa4c-efb630c0e580" Nov 13 05:46:50.969: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-001ef5b9-cc00-4f82-aa4c-efb630c0e580" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-001ef5b9-cc00-4f82-aa4c-efb630c0e580" "/tmp/local-volume-test-001ef5b9-cc00-4f82-aa4c-efb630c0e580"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:50.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-3add34b5-4227-48cc-af29-c667ac5849aa" Nov 13 05:46:51.075: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3add34b5-4227-48cc-af29-c667ac5849aa" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3add34b5-4227-48cc-af29-c667ac5849aa" "/tmp/local-volume-test-3add34b5-4227-48cc-af29-c667ac5849aa"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:51.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4fd05048-4149-42b0-9dd5-1db716a9ed64" Nov 13 05:46:51.191: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4fd05048-4149-42b0-9dd5-1db716a9ed64" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4fd05048-4149-42b0-9dd5-1db716a9ed64" "/tmp/local-volume-test-4fd05048-4149-42b0-9dd5-1db716a9ed64"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:51.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-ff052c0d-cb2e-4bbe-9926-90107b711e15" Nov 13 05:46:51.827: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ff052c0d-cb2e-4bbe-9926-90107b711e15" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ff052c0d-cb2e-4bbe-9926-90107b711e15" "/tmp/local-volume-test-ff052c0d-cb2e-4bbe-9926-90107b711e15"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:51.827: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4c4a544e-0cae-44c9-afc5-fb009a6ff402" Nov 13 05:46:51.985: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4c4a544e-0cae-44c9-afc5-fb009a6ff402" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4c4a544e-0cae-44c9-afc5-fb009a6ff402" "/tmp/local-volume-test-4c4a544e-0cae-44c9-afc5-fb009a6ff402"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:51.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-90104769-92b4-472e-94e3-7680a6d9c5a3" Nov 13 05:46:58.172: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-90104769-92b4-472e-94e3-7680a6d9c5a3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-90104769-92b4-472e-94e3-7680a6d9c5a3" "/tmp/local-volume-test-90104769-92b4-472e-94e3-7680a6d9c5a3"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:58.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-3772ccda-e3d9-44c7-98b9-b3256abae02d" Nov 13 05:46:58.283: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3772ccda-e3d9-44c7-98b9-b3256abae02d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3772ccda-e3d9-44c7-98b9-b3256abae02d" "/tmp/local-volume-test-3772ccda-e3d9-44c7-98b9-b3256abae02d"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:58.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-03b6f71f-a595-4665-a608-cfde9556e00f" Nov 13 05:46:58.379: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-03b6f71f-a595-4665-a608-cfde9556e00f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-03b6f71f-a595-4665-a608-cfde9556e00f" "/tmp/local-volume-test-03b6f71f-a595-4665-a608-cfde9556e00f"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:58.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-d752ccc5-7d19-48c5-8511-34ac2fb8d2b6" Nov 13 05:46:58.475: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d752ccc5-7d19-48c5-8511-34ac2fb8d2b6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d752ccc5-7d19-48c5-8511-34ac2fb8d2b6" "/tmp/local-volume-test-d752ccc5-7d19-48c5-8511-34ac2fb8d2b6"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:58.475: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-bfbae6e9-a55d-4e60-8568-fb62a3eac1ec" Nov 13 05:46:58.569: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-bfbae6e9-a55d-4e60-8568-fb62a3eac1ec" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-bfbae6e9-a55d-4e60-8568-fb62a3eac1ec" "/tmp/local-volume-test-bfbae6e9-a55d-4e60-8568-fb62a3eac1ec"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:58.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-3f42692c-b8b5-4e6e-ae6f-dc6474a4ec40" Nov 13 05:46:58.663: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3f42692c-b8b5-4e6e-ae6f-dc6474a4ec40" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3f42692c-b8b5-4e6e-ae6f-dc6474a4ec40" "/tmp/local-volume-test-3f42692c-b8b5-4e6e-ae6f-dc6474a4ec40"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:58.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-7b2ab630-acc9-41af-b33d-ea0092f6a47e" Nov 13 05:46:58.750: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7b2ab630-acc9-41af-b33d-ea0092f6a47e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7b2ab630-acc9-41af-b33d-ea0092f6a47e" "/tmp/local-volume-test-7b2ab630-acc9-41af-b33d-ea0092f6a47e"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:58.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-a447662c-2b3f-4cdc-a3e9-62411c5c1ad3" Nov 13 05:46:58.894: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a447662c-2b3f-4cdc-a3e9-62411c5c1ad3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a447662c-2b3f-4cdc-a3e9-62411c5c1ad3" "/tmp/local-volume-test-a447662c-2b3f-4cdc-a3e9-62411c5c1ad3"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:58.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-1bd6c566-8feb-43ab-8196-201cd2cd0bdb" Nov 13 05:46:58.990: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1bd6c566-8feb-43ab-8196-201cd2cd0bdb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1bd6c566-8feb-43ab-8196-201cd2cd0bdb" "/tmp/local-volume-test-1bd6c566-8feb-43ab-8196-201cd2cd0bdb"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:58.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount 
point on node "node2" at path "/tmp/local-volume-test-63379926-17e0-4261-961a-6ee2e8820f3e" Nov 13 05:46:59.076: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-63379926-17e0-4261-961a-6ee2e8820f3e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-63379926-17e0-4261-961a-6ee2e8820f3e" "/tmp/local-volume-test-63379926-17e0-4261-961a-6ee2e8820f3e"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:46:59.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully STEP: Delete "local-pvvmp94" and create a new PV for same local volume storage Nov 13 05:47:07.357: INFO: Deleting pod pod-11200529-bd4d-4bdd-9236-0abce7dfad34 Nov 13 05:47:07.364: INFO: Deleting PersistentVolumeClaim "pvc-fwrcw" Nov 13 05:47:07.368: INFO: Deleting PersistentVolumeClaim "pvc-c29gr" Nov 13 05:47:07.371: INFO: Deleting PersistentVolumeClaim "pvc-6mpwq" Nov 13 05:47:07.375: INFO: 1/28 pods finished STEP: Delete "local-pvbhbtx" and create a new PV for same local volume storage STEP: Delete "local-pvnnl99" and create a new PV for same local volume storage STEP: Delete "local-pvdknjq" and create a new PV for same local volume storage Nov 13 05:47:09.356: INFO: Deleting pod pod-2febd663-5305-48c7-8328-a062c042a1d9 Nov 13 05:47:09.365: INFO: Deleting PersistentVolumeClaim "pvc-52vzr" Nov 13 05:47:09.369: INFO: Deleting PersistentVolumeClaim "pvc-xbbxj" Nov 13 05:47:09.373: INFO: Deleting PersistentVolumeClaim "pvc-j4rms" Nov 13 05:47:09.376: INFO: 2/28 pods finished STEP: Delete "local-pv6rsh2" and create a new PV for same local volume storage STEP: Delete "local-pv5tzjp" and create a new PV for same local volume storage STEP: Delete "local-pv7bq2q" and create a new PV for same local volume storage STEP: Delete "local-pv8nshv" and create a new PV for same local volume storage Nov 13 05:47:11.356: INFO: Deleting pod pod-4ec9f64a-4300-498c-8f60-5370623c9b90 Nov 13 05:47:11.363: INFO: Deleting PersistentVolumeClaim "pvc-qvhcn" Nov 13 05:47:11.369: INFO: Deleting PersistentVolumeClaim "pvc-mhr6m" Nov 13 05:47:11.373: INFO: Deleting PersistentVolumeClaim "pvc-8xktk" Nov 13 05:47:11.377: INFO: 3/28 pods finished STEP: Delete "local-pvxtmwl" and create a new PV for same local volume storage STEP: Delete "local-pvp6rzx" and create a new PV for same local volume storage STEP: Delete "local-pv9xntr" and create a new PV for same local volume storage Nov 13 05:47:13.356: INFO: Deleting pod pod-644f200e-b358-4149-a9e9-d1a1fafbe884 Nov 13 05:47:13.364: INFO: Deleting PersistentVolumeClaim "pvc-7s627" Nov 13 05:47:13.368: INFO: Deleting PersistentVolumeClaim "pvc-npv76" Nov 13 05:47:13.372: INFO: Deleting PersistentVolumeClaim "pvc-rw252" Nov 13 05:47:13.375: INFO: 4/28 pods finished Nov 13 05:47:13.375: INFO: Deleting pod pod-7cf1ccf8-2f13-4d1b-bf93-ed68618bf296 Nov 13 05:47:13.381: INFO: Deleting PersistentVolumeClaim "pvc-s8kft" STEP: Delete "local-pvtrpb6" and create a new PV for same local volume storage Nov 13 05:47:13.385: INFO: Deleting PersistentVolumeClaim "pvc-dv95x" Nov 13 
05:47:13.388: INFO: Deleting PersistentVolumeClaim "pvc-b4vtp" STEP: Delete "local-pv46xxn" and create a new PV for same local volume storage Nov 13 05:47:13.392: INFO: 5/28 pods finished STEP: Delete "local-pvfdvgg" and create a new PV for same local volume storage STEP: Delete "local-pvgmrhr" and create a new PV for same local volume storage STEP: Delete "local-pvc4hvw" and create a new PV for same local volume storage STEP: Delete "local-pvdkrw9" and create a new PV for same local volume storage Nov 13 05:47:16.356: INFO: Deleting pod pod-87164bab-28f0-411f-8ffb-9f395c13c54f Nov 13 05:47:16.362: INFO: Deleting PersistentVolumeClaim "pvc-9wx8n" Nov 13 05:47:16.365: INFO: Deleting PersistentVolumeClaim "pvc-6csdf" Nov 13 05:47:16.368: INFO: Deleting PersistentVolumeClaim "pvc-kwzcj" Nov 13 05:47:16.372: INFO: 6/28 pods finished STEP: Delete "local-pvsq6rj" and create a new PV for same local volume storage STEP: Delete "local-pvzs2jc" and create a new PV for same local volume storage STEP: Delete "local-pvvg9g8" and create a new PV for same local volume storage Nov 13 05:47:21.356: INFO: Deleting pod pod-97750e2d-30e6-48db-80b6-c0320657deed Nov 13 05:47:21.365: INFO: Deleting PersistentVolumeClaim "pvc-lkf9h" Nov 13 05:47:21.369: INFO: Deleting PersistentVolumeClaim "pvc-glwzx" Nov 13 05:47:21.373: INFO: Deleting PersistentVolumeClaim "pvc-x462p" Nov 13 05:47:21.376: INFO: 7/28 pods finished STEP: Delete "local-pvgskwt" and create a new PV for same local volume storage STEP: Delete "local-pvk9plq" and create a new PV for same local volume storage STEP: Delete "local-pvngrpm" and create a new PV for same local volume storage STEP: Delete "local-pvvfktk" and create a new PV for same local volume storage Nov 13 05:47:28.357: INFO: Deleting pod pod-07ba6316-b0e3-4685-b3ed-9914aae67798 Nov 13 05:47:28.365: INFO: Deleting PersistentVolumeClaim "pvc-wqzql" Nov 13 05:47:28.369: INFO: Deleting PersistentVolumeClaim "pvc-ttb4c" Nov 13 05:47:28.372: INFO: Deleting PersistentVolumeClaim "pvc-d8zck" Nov 13 05:47:28.376: INFO: 8/28 pods finished STEP: Delete "local-pvmz4b6" and create a new PV for same local volume storage STEP: Delete "local-pvq29mh" and create a new PV for same local volume storage STEP: Delete "local-pv8j4tz" and create a new PV for same local volume storage Nov 13 05:47:32.359: INFO: Deleting pod pod-3fa5be54-ad6c-45dd-9aa7-b13645a944bc Nov 13 05:47:32.366: INFO: Deleting PersistentVolumeClaim "pvc-kj5fh" Nov 13 05:47:32.369: INFO: Deleting PersistentVolumeClaim "pvc-55qlr" Nov 13 05:47:32.374: INFO: Deleting PersistentVolumeClaim "pvc-j6m7g" Nov 13 05:47:32.378: INFO: 9/28 pods finished STEP: Delete "local-pv55fvg" and create a new PV for same local volume storage STEP: Delete "local-pvp5cll" and create a new PV for same local volume storage STEP: Delete "local-pvwmzfc" and create a new PV for same local volume storage Nov 13 05:47:34.357: INFO: Deleting pod pod-46fad40a-92d7-4181-95e0-a94ffdb3bb19 Nov 13 05:47:34.364: INFO: Deleting PersistentVolumeClaim "pvc-whlrw" Nov 13 05:47:34.369: INFO: Deleting PersistentVolumeClaim "pvc-x2psp" Nov 13 05:47:34.373: INFO: Deleting PersistentVolumeClaim "pvc-9275x" Nov 13 05:47:34.377: INFO: 10/28 pods finished STEP: Delete "local-pvm9jk4" and create a new PV for same local volume storage STEP: Delete "local-pvl2cml" and create a new PV for same local volume storage STEP: Delete "local-pv7c2mh" and create a new PV for same local volume storage Nov 13 05:47:39.355: INFO: Deleting pod pod-fef442c2-998d-4179-98c6-73ab6cbe0eef Nov 13 
05:47:39.361: INFO: Deleting PersistentVolumeClaim "pvc-dr5bs" Nov 13 05:47:39.365: INFO: Deleting PersistentVolumeClaim "pvc-q2ksf" Nov 13 05:47:39.369: INFO: Deleting PersistentVolumeClaim "pvc-mb7z7" Nov 13 05:47:39.373: INFO: 11/28 pods finished STEP: Delete "local-pv59vfv" and create a new PV for same local volume storage STEP: Delete "local-pvzh5l2" and create a new PV for same local volume storage STEP: Delete "local-pvqkwrz" and create a new PV for same local volume storage Nov 13 05:47:41.356: INFO: Deleting pod pod-478bf89d-74eb-408e-a067-63447c32b407 Nov 13 05:47:41.363: INFO: Deleting PersistentVolumeClaim "pvc-bjnv6" Nov 13 05:47:41.366: INFO: Deleting PersistentVolumeClaim "pvc-lb695" Nov 13 05:47:41.370: INFO: Deleting PersistentVolumeClaim "pvc-rczmw" Nov 13 05:47:41.374: INFO: 12/28 pods finished Nov 13 05:47:41.374: INFO: Deleting pod pod-5064cfd4-3d68-4783-9850-b52596295273 Nov 13 05:47:41.381: INFO: Deleting PersistentVolumeClaim "pvc-pnmh9" STEP: Delete "local-pvnz6xh" and create a new PV for same local volume storage Nov 13 05:47:41.384: INFO: Deleting PersistentVolumeClaim "pvc-pj5tq" Nov 13 05:47:41.388: INFO: Deleting PersistentVolumeClaim "pvc-4qx48" Nov 13 05:47:41.393: INFO: 13/28 pods finished STEP: Delete "local-pvvdn2h" and create a new PV for same local volume storage STEP: Delete "local-pv25466" and create a new PV for same local volume storage STEP: Delete "local-pvqxv5m" and create a new PV for same local volume storage STEP: Delete "local-pvztxgf" and create a new PV for same local volume storage STEP: Delete "local-pvcdkw4" and create a new PV for same local volume storage Nov 13 05:47:42.357: INFO: Deleting pod pod-d31fbc5b-4ffd-4cb5-8e04-54dac0733a7d Nov 13 05:47:42.369: INFO: Deleting PersistentVolumeClaim "pvc-dmg49" Nov 13 05:47:42.373: INFO: Deleting PersistentVolumeClaim "pvc-4gpxk" Nov 13 05:47:42.376: INFO: Deleting PersistentVolumeClaim "pvc-sfjh9" Nov 13 05:47:42.380: INFO: 14/28 pods finished STEP: Delete "local-pvr4sfm" and create a new PV for same local volume storage STEP: Delete "local-pv7sqfk" and create a new PV for same local volume storage STEP: Delete "local-pvtfd9q" and create a new PV for same local volume storage Nov 13 05:47:51.356: INFO: Deleting pod pod-598d3a4e-b7c6-46f3-abb6-b52f37b78830 Nov 13 05:47:51.364: INFO: Deleting PersistentVolumeClaim "pvc-98j6j" Nov 13 05:47:51.367: INFO: Deleting PersistentVolumeClaim "pvc-d8xps" Nov 13 05:47:51.371: INFO: Deleting PersistentVolumeClaim "pvc-7zkzd" Nov 13 05:47:51.374: INFO: 15/28 pods finished STEP: Delete "local-pv8vz58" and create a new PV for same local volume storage STEP: Delete "local-pvwwrc4" and create a new PV for same local volume storage STEP: Delete "local-pvlc2qj" and create a new PV for same local volume storage Nov 13 05:47:52.357: INFO: Deleting pod pod-4ed74420-a0ac-4fdf-88fb-b97d91f47170 Nov 13 05:47:52.365: INFO: Deleting PersistentVolumeClaim "pvc-97mrc" Nov 13 05:47:52.369: INFO: Deleting PersistentVolumeClaim "pvc-5ppdb" Nov 13 05:47:52.373: INFO: Deleting PersistentVolumeClaim "pvc-mqqqg" Nov 13 05:47:52.376: INFO: 16/28 pods finished STEP: Delete "local-pvpmm9b" and create a new PV for same local volume storage STEP: Delete "local-pvw49fg" and create a new PV for same local volume storage STEP: Delete "local-pvvksnf" and create a new PV for same local volume storage Nov 13 05:47:54.355: INFO: Deleting pod pod-7ca99d52-0a0c-48e7-ae5f-02c2914701c9 Nov 13 05:47:54.363: INFO: Deleting PersistentVolumeClaim "pvc-lfb5q" Nov 13 05:47:54.367: INFO: Deleting 
PersistentVolumeClaim "pvc-hqpc7" Nov 13 05:47:54.371: INFO: Deleting PersistentVolumeClaim "pvc-6lgvx" Nov 13 05:47:54.374: INFO: 17/28 pods finished Nov 13 05:47:54.374: INFO: Deleting pod pod-d0f13272-d2c1-49e9-8536-9c3cdcb05982 STEP: Delete "local-pvhbt8p" and create a new PV for same local volume storage Nov 13 05:47:54.382: INFO: Deleting PersistentVolumeClaim "pvc-5xh85" Nov 13 05:47:54.385: INFO: Deleting PersistentVolumeClaim "pvc-5rlht" Nov 13 05:47:54.388: INFO: Deleting PersistentVolumeClaim "pvc-6wmkd" STEP: Delete "local-pvmv8sl" and create a new PV for same local volume storage Nov 13 05:47:54.392: INFO: 18/28 pods finished STEP: Delete "local-pvfcsg8" and create a new PV for same local volume storage STEP: Delete "local-pvjxxrw" and create a new PV for same local volume storage STEP: Delete "local-pv7nmbq" and create a new PV for same local volume storage STEP: Delete "local-pvqbhfr" and create a new PV for same local volume storage Nov 13 05:47:56.358: INFO: Deleting pod pod-fb0e2e64-a22d-457c-8402-0f34c367929e Nov 13 05:47:56.363: INFO: Deleting PersistentVolumeClaim "pvc-9ccr4" Nov 13 05:47:56.367: INFO: Deleting PersistentVolumeClaim "pvc-54mv5" Nov 13 05:47:56.371: INFO: Deleting PersistentVolumeClaim "pvc-6vrzb" Nov 13 05:47:56.374: INFO: 19/28 pods finished STEP: Delete "local-pvvlvtn" and create a new PV for same local volume storage Nov 13 05:47:57.356: INFO: Deleting pod pod-9a7263a9-0885-4d1a-b99a-85607e49db25 Nov 13 05:47:57.363: INFO: Deleting PersistentVolumeClaim "pvc-6kc5b" STEP: Delete "local-pvs4tc9" and create a new PV for same local volume storage Nov 13 05:47:57.367: INFO: Deleting PersistentVolumeClaim "pvc-qpjpt" Nov 13 05:47:57.371: INFO: Deleting PersistentVolumeClaim "pvc-dql8c" Nov 13 05:47:57.375: INFO: 20/28 pods finished STEP: Delete "local-pvxc96z" and create a new PV for same local volume storage STEP: Delete "local-pvvxzpd" and create a new PV for same local volume storage STEP: Delete "local-pvsjmtr" and create a new PV for same local volume storage STEP: Delete "local-pvkkq7s" and create a new PV for same local volume storage Nov 13 05:48:04.356: INFO: Deleting pod pod-6aafd56a-ae2d-4ac1-b3b8-837a414b2662 Nov 13 05:48:04.363: INFO: Deleting PersistentVolumeClaim "pvc-5cz6f" Nov 13 05:48:04.366: INFO: Deleting PersistentVolumeClaim "pvc-v6gj2" Nov 13 05:48:04.369: INFO: Deleting PersistentVolumeClaim "pvc-4hktb" Nov 13 05:48:04.373: INFO: 21/28 pods finished STEP: Delete "local-pvp67ps" and create a new PV for same local volume storage STEP: Delete "local-pvwrwqf" and create a new PV for same local volume storage STEP: Delete "local-pvjsqhf" and create a new PV for same local volume storage STEP: Delete "pvc-49d83426-0352-4770-84ba-6dba819d4d98" and create a new PV for same local volume storage Nov 13 05:48:07.356: INFO: Deleting pod pod-a04ccb54-4271-4f38-a0f8-c018626f965e Nov 13 05:48:07.363: INFO: Deleting PersistentVolumeClaim "pvc-c8lfj" Nov 13 05:48:07.368: INFO: Deleting PersistentVolumeClaim "pvc-95mr6" Nov 13 05:48:07.372: INFO: Deleting PersistentVolumeClaim "pvc-pdxg9" Nov 13 05:48:07.375: INFO: 22/28 pods finished STEP: Delete "local-pvljtkc" and create a new PV for same local volume storage STEP: Delete "local-pvk4cb7" and create a new PV for same local volume storage STEP: Delete "local-pvrlrbk" and create a new PV for same local volume storage STEP: Delete "pvc-49d83426-0352-4770-84ba-6dba819d4d98" and create a new PV for same local volume storage STEP: Delete "pvc-49d83426-0352-4770-84ba-6dba819d4d98" and create a new PV for 
same local volume storage Nov 13 05:48:08.357: INFO: Deleting pod pod-8a410358-e2f5-4ae0-8215-746dad1f7d02 Nov 13 05:48:08.365: INFO: Deleting PersistentVolumeClaim "pvc-vzkpj" Nov 13 05:48:08.370: INFO: Deleting PersistentVolumeClaim "pvc-54l7z" Nov 13 05:48:08.374: INFO: Deleting PersistentVolumeClaim "pvc-rl4dh" Nov 13 05:48:08.378: INFO: 23/28 pods finished STEP: Delete "local-pvlnmtp" and create a new PV for same local volume storage STEP: Delete "local-pvrzz74" and create a new PV for same local volume storage STEP: Delete "local-pvcvwwt" and create a new PV for same local volume storage Nov 13 05:48:09.358: INFO: Deleting pod pod-b3743fcd-f0f6-4d3f-8781-0a278d9b1a04 Nov 13 05:48:09.364: INFO: Deleting PersistentVolumeClaim "pvc-bwnxp" Nov 13 05:48:09.368: INFO: Deleting PersistentVolumeClaim "pvc-dnht6" Nov 13 05:48:09.371: INFO: Deleting PersistentVolumeClaim "pvc-sxj8g" Nov 13 05:48:09.375: INFO: 24/28 pods finished STEP: Delete "local-pv8vzr2" and create a new PV for same local volume storage STEP: Delete "local-pvmw4f7" and create a new PV for same local volume storage STEP: Delete "local-pvgsdb7" and create a new PV for same local volume storage Nov 13 05:48:11.355: INFO: Deleting pod pod-067545ee-ef69-4f23-b8fc-4eee54b0d7a7 Nov 13 05:48:11.364: INFO: Deleting PersistentVolumeClaim "pvc-hq29x" Nov 13 05:48:11.368: INFO: Deleting PersistentVolumeClaim "pvc-gvz86" Nov 13 05:48:11.371: INFO: Deleting PersistentVolumeClaim "pvc-zvcmk" Nov 13 05:48:11.375: INFO: 25/28 pods finished STEP: Delete "local-pvqc2sc" and create a new PV for same local volume storage STEP: Delete "local-pv4lhxh" and create a new PV for same local volume storage STEP: Delete "local-pv22kzx" and create a new PV for same local volume storage Nov 13 05:48:14.356: INFO: Deleting pod pod-35a22b49-7403-4af3-8859-380fa3db7284 Nov 13 05:48:14.362: INFO: Deleting PersistentVolumeClaim "pvc-ncv69" Nov 13 05:48:14.365: INFO: Deleting PersistentVolumeClaim "pvc-rgsrl" Nov 13 05:48:14.369: INFO: Deleting PersistentVolumeClaim "pvc-kl6gn" Nov 13 05:48:14.373: INFO: 26/28 pods finished STEP: Delete "local-pvvzn8g" and create a new PV for same local volume storage STEP: Delete "local-pv4z6fr" and create a new PV for same local volume storage STEP: Delete "local-pvs2w4x" and create a new PV for same local volume storage Nov 13 05:48:22.356: INFO: Deleting pod pod-ee34bbbc-59e8-4f99-a593-ded7a8ceefba Nov 13 05:48:22.363: INFO: Deleting PersistentVolumeClaim "pvc-tclkm" Nov 13 05:48:22.367: INFO: Deleting PersistentVolumeClaim "pvc-n8jjx" Nov 13 05:48:22.371: INFO: Deleting PersistentVolumeClaim "pvc-lff4g" Nov 13 05:48:22.375: INFO: 27/28 pods finished STEP: Delete "local-pvcpxxb" and create a new PV for same local volume storage STEP: Delete "local-pvnsmfd" and create a new PV for same local volume storage STEP: Delete "local-pvw9nk4" and create a new PV for same local volume storage STEP: Delete "local-pvnnkr4" and create a new PV for same local volume storage Nov 13 05:48:23.355: INFO: Deleting pod pod-70ab388e-e8f5-4779-a3de-f0e153f3606c Nov 13 05:48:23.360: INFO: Deleting PersistentVolumeClaim "pvc-q5qvx" Nov 13 05:48:23.364: INFO: Deleting PersistentVolumeClaim "pvc-ncb69" Nov 13 05:48:23.367: INFO: Deleting PersistentVolumeClaim "pvc-4jf4h" Nov 13 05:48:23.371: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519 STEP: Stop and wait for recycle goroutine to finish STEP: Clean 
all PVs STEP: Cleaning up 10 local volumes on node "node1" STEP: Cleaning up PVC and PV Nov 13 05:48:23.371: INFO: pvc is nil Nov 13 05:48:23.371: INFO: Deleting PersistentVolume "local-pv6bv4d" STEP: Cleaning up PVC and PV Nov 13 05:48:23.375: INFO: pvc is nil Nov 13 05:48:23.375: INFO: Deleting PersistentVolume "local-pv6f7hc" STEP: Cleaning up PVC and PV Nov 13 05:48:23.379: INFO: pvc is nil Nov 13 05:48:23.379: INFO: Deleting PersistentVolume "local-pvb5phw" STEP: Cleaning up PVC and PV Nov 13 05:48:23.383: INFO: pvc is nil Nov 13 05:48:23.383: INFO: Deleting PersistentVolume "local-pvz4lhv" STEP: Cleaning up PVC and PV Nov 13 05:48:23.387: INFO: pvc is nil Nov 13 05:48:23.387: INFO: Deleting PersistentVolume "local-pvxm75x" STEP: Cleaning up PVC and PV Nov 13 05:48:23.390: INFO: pvc is nil Nov 13 05:48:23.390: INFO: Deleting PersistentVolume "local-pvdkkn4" STEP: Cleaning up PVC and PV Nov 13 05:48:23.396: INFO: pvc is nil Nov 13 05:48:23.396: INFO: Deleting PersistentVolume "local-pvwr8d2" STEP: Cleaning up PVC and PV Nov 13 05:48:23.434: INFO: pvc is nil Nov 13 05:48:23.434: INFO: Deleting PersistentVolume "local-pvzpcqs" STEP: Cleaning up PVC and PV Nov 13 05:48:23.438: INFO: pvc is nil Nov 13 05:48:23.438: INFO: Deleting PersistentVolume "local-pv2xjtw" STEP: Cleaning up PVC and PV Nov 13 05:48:23.441: INFO: pvc is nil Nov 13 05:48:23.441: INFO: Deleting PersistentVolume "local-pvgw46j" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-1aceb9ac-c444-4099-94a0-350135ab0e61" Nov 13 05:48:23.444: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1aceb9ac-c444-4099-94a0-350135ab0e61"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:23.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:23.539: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1aceb9ac-c444-4099-94a0-350135ab0e61] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:23.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-60d9c491-baf5-4a8f-a320-07ed8d9dd7a8" Nov 13 05:48:23.621: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-60d9c491-baf5-4a8f-a320-07ed8d9dd7a8"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:23.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:23.722: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-60d9c491-baf5-4a8f-a320-07ed8d9dd7a8] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:23.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-b0a88616-0228-48d1-8faf-985fb9155041" Nov 13 05:48:23.801: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b0a88616-0228-48d1-8faf-985fb9155041"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:23.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:23.892: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b0a88616-0228-48d1-8faf-985fb9155041] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:23.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-bc0cb856-7a3a-499d-b9bf-68edeae8b80f" Nov 13 05:48:23.982: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-bc0cb856-7a3a-499d-b9bf-68edeae8b80f"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:23.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:24.091: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bc0cb856-7a3a-499d-b9bf-68edeae8b80f] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-9e22c8a5-6ea8-416c-8a0b-d6c003b2354d" Nov 13 05:48:24.177: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9e22c8a5-6ea8-416c-8a0b-d6c003b2354d"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:24.271: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9e22c8a5-6ea8-416c-8a0b-d6c003b2354d] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-001ef5b9-cc00-4f82-aa4c-efb630c0e580" Nov 13 05:48:24.364: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-001ef5b9-cc00-4f82-aa4c-efb630c0e580"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:24.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-001ef5b9-cc00-4f82-aa4c-efb630c0e580] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-3add34b5-4227-48cc-af29-c667ac5849aa" Nov 13 05:48:24.537: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3add34b5-4227-48cc-af29-c667ac5849aa"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:24.638: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3add34b5-4227-48cc-af29-c667ac5849aa] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4fd05048-4149-42b0-9dd5-1db716a9ed64" Nov 13 05:48:24.725: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4fd05048-4149-42b0-9dd5-1db716a9ed64"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:24.831: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4fd05048-4149-42b0-9dd5-1db716a9ed64] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-ff052c0d-cb2e-4bbe-9926-90107b711e15" Nov 13 05:48:24.915: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ff052c0d-cb2e-4bbe-9926-90107b711e15"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:24.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:25.045: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ff052c0d-cb2e-4bbe-9926-90107b711e15] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4c4a544e-0cae-44c9-afc5-fb009a6ff402" Nov 13 05:48:25.130: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-4c4a544e-0cae-44c9-afc5-fb009a6ff402"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:25.224: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4c4a544e-0cae-44c9-afc5-fb009a6ff402] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node1-r4d2d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "node2" STEP: Cleaning up PVC and PV Nov 13 05:48:25.314: INFO: pvc is nil Nov 13 05:48:25.314: INFO: Deleting PersistentVolume "local-pvbsl7z" STEP: Cleaning up PVC and PV Nov 13 05:48:25.317: INFO: pvc is nil Nov 13 05:48:25.317: INFO: Deleting PersistentVolume "local-pvg2fmq" STEP: Cleaning up PVC and PV Nov 13 05:48:25.321: INFO: pvc is nil Nov 13 05:48:25.321: INFO: Deleting PersistentVolume "local-pvkscgw" STEP: Cleaning up PVC and PV Nov 13 05:48:25.326: INFO: pvc is nil Nov 13 05:48:25.326: INFO: Deleting PersistentVolume "local-pvgvrhd" STEP: Cleaning up PVC and PV Nov 13 05:48:25.330: INFO: pvc is nil Nov 13 05:48:25.330: INFO: Deleting PersistentVolume "local-pv4jr4x" STEP: Cleaning up PVC and PV Nov 13 05:48:25.333: INFO: pvc is nil Nov 13 05:48:25.333: INFO: Deleting PersistentVolume "local-pvz7bp7" STEP: Cleaning up PVC and PV Nov 13 05:48:25.337: INFO: pvc is nil Nov 13 05:48:25.337: INFO: Deleting PersistentVolume "local-pv6xbdx" STEP: Cleaning up PVC and PV Nov 13 05:48:25.341: INFO: pvc is nil Nov 13 05:48:25.341: INFO: Deleting PersistentVolume "local-pvvwvhq" STEP: Cleaning up PVC and PV Nov 13 05:48:25.344: INFO: pvc is nil Nov 13 05:48:25.344: INFO: Deleting PersistentVolume "local-pvvqz8q" STEP: Cleaning up PVC and PV Nov 13 05:48:25.348: INFO: pvc is nil Nov 13 05:48:25.348: INFO: Deleting PersistentVolume "local-pvcn958" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-90104769-92b4-472e-94e3-7680a6d9c5a3" Nov 13 05:48:25.352: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-90104769-92b4-472e-94e3-7680a6d9c5a3"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:25.441: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-90104769-92b4-472e-94e3-7680a6d9c5a3] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-3772ccda-e3d9-44c7-98b9-b3256abae02d" Nov 13 05:48:25.518: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3772ccda-e3d9-44c7-98b9-b3256abae02d"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:25.603: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3772ccda-e3d9-44c7-98b9-b3256abae02d] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-03b6f71f-a595-4665-a608-cfde9556e00f" Nov 13 05:48:25.722: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-03b6f71f-a595-4665-a608-cfde9556e00f"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:25.809: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-03b6f71f-a595-4665-a608-cfde9556e00f] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-d752ccc5-7d19-48c5-8511-34ac2fb8d2b6" Nov 13 05:48:25.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d752ccc5-7d19-48c5-8511-34ac2fb8d2b6"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:25.980: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d752ccc5-7d19-48c5-8511-34ac2fb8d2b6] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:25.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-bfbae6e9-a55d-4e60-8568-fb62a3eac1ec" Nov 13 05:48:26.060: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-bfbae6e9-a55d-4e60-8568-fb62a3eac1ec"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:26.180: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bfbae6e9-a55d-4e60-8568-fb62a3eac1ec] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.180: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-3f42692c-b8b5-4e6e-ae6f-dc6474a4ec40" Nov 13 05:48:26.262: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3f42692c-b8b5-4e6e-ae6f-dc6474a4ec40"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:26.345: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3f42692c-b8b5-4e6e-ae6f-dc6474a4ec40] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-7b2ab630-acc9-41af-b33d-ea0092f6a47e" Nov 13 05:48:26.425: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7b2ab630-acc9-41af-b33d-ea0092f6a47e"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:26.509: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7b2ab630-acc9-41af-b33d-ea0092f6a47e] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-a447662c-2b3f-4cdc-a3e9-62411c5c1ad3" Nov 13 05:48:26.592: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a447662c-2b3f-4cdc-a3e9-62411c5c1ad3"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:26.679: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a447662c-2b3f-4cdc-a3e9-62411c5c1ad3] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-1bd6c566-8feb-43ab-8196-201cd2cd0bdb" Nov 13 05:48:26.764: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1bd6c566-8feb-43ab-8196-201cd2cd0bdb"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.764: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Removing the test directory Nov 13 05:48:26.862: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1bd6c566-8feb-43ab-8196-201cd2cd0bdb] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-63379926-17e0-4261-961a-6ee2e8820f3e" Nov 13 05:48:26.950: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-63379926-17e0-4261-961a-6ee2e8820f3e"] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:26.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:27.491: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-63379926-17e0-4261-961a-6ee2e8820f3e] Namespace:persistent-local-volumes-test-6582 PodName:hostexec-node2-lqb9c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:27.491: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:27.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6582" for this suite. • [SLOW TEST:100.007 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":-1,"completed":8,"skipped":268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:49.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:47:57.262: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8e74eb8f-399d-4e13-b788-d0cbf449136b] 
Namespace:persistent-local-volumes-test-1480 PodName:hostexec-node1-gjddw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:47:57.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:47:57.836: INFO: Creating a PV followed by a PVC Nov 13 05:47:57.846: INFO: Waiting for PV local-pvcpz5f to bind to PVC pvc-vw8c4 Nov 13 05:47:57.846: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vw8c4] to have phase Bound Nov 13 05:47:57.849: INFO: PersistentVolumeClaim pvc-vw8c4 found but phase is Pending instead of Bound. Nov 13 05:47:59.853: INFO: PersistentVolumeClaim pvc-vw8c4 found but phase is Pending instead of Bound. Nov 13 05:48:01.857: INFO: PersistentVolumeClaim pvc-vw8c4 found but phase is Pending instead of Bound. Nov 13 05:48:03.862: INFO: PersistentVolumeClaim pvc-vw8c4 found but phase is Pending instead of Bound. Nov 13 05:48:05.866: INFO: PersistentVolumeClaim pvc-vw8c4 found but phase is Pending instead of Bound. Nov 13 05:48:07.869: INFO: PersistentVolumeClaim pvc-vw8c4 found but phase is Pending instead of Bound. Nov 13 05:48:09.873: INFO: PersistentVolumeClaim pvc-vw8c4 found but phase is Pending instead of Bound. Nov 13 05:48:11.878: INFO: PersistentVolumeClaim pvc-vw8c4 found and phase=Bound (14.031834262s) Nov 13 05:48:11.878: INFO: Waiting up to 3m0s for PersistentVolume local-pvcpz5f to have phase Bound Nov 13 05:48:11.881: INFO: PersistentVolume local-pvcpz5f found and phase=Bound (2.815425ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:48:17.905: INFO: pod "pod-64572785-b551-425b-8298-9e3937003ae7" created on Node "node1" STEP: Writing in pod1 Nov 13 05:48:17.906: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1480 PodName:pod-64572785-b551-425b-8298-9e3937003ae7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:17.906: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:18.070: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:48:18.070: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1480 PodName:pod-64572785-b551-425b-8298-9e3937003ae7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:18.070: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:18.200: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-64572785-b551-425b-8298-9e3937003ae7 in namespace persistent-local-volumes-test-1480 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:48:26.225: INFO: pod "pod-fd34b2c3-6930-4f3f-950e-fc43f6e4d013" created on Node "node1" STEP: Reading in pod2 Nov 13 05:48:26.225: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1480 PodName:pod-fd34b2c3-6930-4f3f-950e-fc43f6e4d013 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:26.225: INFO: >>> kubeConfig: 
/root/.kube/config Nov 13 05:48:27.794: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-fd34b2c3-6930-4f3f-950e-fc43f6e4d013 in namespace persistent-local-volumes-test-1480 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:48:27.798: INFO: Deleting PersistentVolumeClaim "pvc-vw8c4" Nov 13 05:48:27.803: INFO: Deleting PersistentVolume "local-pvcpz5f" STEP: Removing the test directory Nov 13 05:48:27.806: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8e74eb8f-399d-4e13-b788-d0cbf449136b] Namespace:persistent-local-volumes-test-1480 PodName:hostexec-node1-gjddw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:27.806: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:27.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1480" for this suite. • [SLOW TEST:38.707 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:28.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:48:28.053: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:28.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-808" for this suite. 
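The "Setting up 10 local volumes" and "Unmount tmpfs mount point" steps earlier in this section run ordinary shell on the node through a hostexec pod, entering the host mount namespace with nsenter. A minimal sketch of the per-volume lifecycle those steps execute, using an illustrative path under /tmp (the real suite appends a random UUID to each directory name):

# Each command is run on the node via the hostexec pod, e.g.:
#   nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c '<command>'
dir="/tmp/local-volume-test-example"   # illustrative; the suite uses a UUID-suffixed path
# Setup: create the directory and back it with a small tmpfs
mkdir -p "$dir" && mount -t tmpfs -o size=10m "tmpfs-$dir" "$dir"
# ... a local PersistentVolume pointing at $dir is then created and exercised by pods ...
# Teardown: unmount the tmpfs and remove the test directory
umount "$dir"
rm -r "$dir"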
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:19.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 13 05:48:19.084: INFO: The status of Pod test-hostpath-type-f5zk6 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:48:21.088: INFO: The status of Pod test-hostpath-type-f5zk6 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:48:23.089: INFO: The status of Pod test-hostpath-type-f5zk6 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:48:25.089: INFO: The status of Pod test-hostpath-type-f5zk6 is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:48:27.089: INFO: The status of Pod test-hostpath-type-f5zk6 is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Nov 13 05:48:27.092: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-9975 PodName:test-hostpath-type-f5zk6 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:27.092: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350 [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:31.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-9975" for this suite. 
• [SLOW TEST:12.632 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset","total":-1,"completed":18,"skipped":737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:28.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Nov 13 05:48:28.108: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Nov 13 05:48:28.116: INFO: Waiting up to 30s for PersistentVolume hostpath-z24sb to have phase Available Nov 13 05:48:28.120: INFO: PersistentVolume hostpath-z24sb found but phase is Pending instead of Available. Nov 13 05:48:29.123: INFO: PersistentVolume hostpath-z24sb found and phase=Available (1.00734391s) STEP: Checking that PV Protection finalizer is set [It] Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 STEP: Creating a PVC STEP: Waiting for PVC to become Bound Nov 13 05:48:29.130: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6jkbm] to have phase Bound Nov 13 05:48:29.132: INFO: PersistentVolumeClaim pvc-6jkbm found but phase is Pending instead of Bound. Nov 13 05:48:31.136: INFO: PersistentVolumeClaim pvc-6jkbm found and phase=Bound (2.006010916s) STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC STEP: Checking that the PV status is Terminating STEP: Deleting the PVC that is bound to the PV STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC Nov 13 05:48:31.147: INFO: Waiting up to 3m0s for PersistentVolume hostpath-z24sb to get deleted Nov 13 05:48:31.149: INFO: PersistentVolume hostpath-z24sb found and phase=Bound (2.069689ms) Nov 13 05:48:33.152: INFO: PersistentVolume hostpath-z24sb was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:33.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-4493" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Nov 13 05:48:33.159: INFO: AfterEach: Cleaning up test resources. 
Nov 13 05:48:33.159: INFO: Deleting PersistentVolumeClaim "pvc-6jkbm" Nov 13 05:48:33.162: INFO: Deleting PersistentVolume "hostpath-z24sb" • [SLOW TEST:5.080 seconds] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":10,"skipped":266,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:12.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-071fd554-6852-4167-b7cb-231ad0482a0d" Nov 13 05:48:22.823: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-071fd554-6852-4167-b7cb-231ad0482a0d && dd if=/dev/zero of=/tmp/local-volume-test-071fd554-6852-4167-b7cb-231ad0482a0d/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-071fd554-6852-4167-b7cb-231ad0482a0d/file] Namespace:persistent-local-volumes-test-3305 PodName:hostexec-node1-79w7m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:22.823: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:23.051: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-071fd554-6852-4167-b7cb-231ad0482a0d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3305 PodName:hostexec-node1-79w7m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:23.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:48:23.136: INFO: Creating a PV followed by a PVC Nov 13 05:48:23.142: INFO: Waiting for PV local-pvrfdvd to bind to PVC pvc-2ksfq Nov 13 05:48:23.142: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2ksfq] to have phase Bound Nov 13 05:48:23.144: INFO: PersistentVolumeClaim pvc-2ksfq found but phase is Pending instead of Bound. 
Nov 13 05:48:25.150: INFO: PersistentVolumeClaim pvc-2ksfq found and phase=Bound (2.00794904s) Nov 13 05:48:25.150: INFO: Waiting up to 3m0s for PersistentVolume local-pvrfdvd to have phase Bound Nov 13 05:48:25.152: INFO: PersistentVolume local-pvrfdvd found and phase=Bound (2.069413ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:48:31.178: INFO: pod "pod-f9af67c0-2ade-4e24-9b9b-0cf918bb9929" created on Node "node1" STEP: Writing in pod1 Nov 13 05:48:31.178: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3305 PodName:pod-f9af67c0-2ade-4e24-9b9b-0cf918bb9929 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:31.178: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:31.267: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000126 seconds, 139.5KB/s", err: Nov 13 05:48:31.267: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3305 PodName:pod-f9af67c0-2ade-4e24-9b9b-0cf918bb9929 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:31.267: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:31.352: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-f9af67c0-2ade-4e24-9b9b-0cf918bb9929 in namespace persistent-local-volumes-test-3305 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:48:35.377: INFO: pod "pod-786f85ab-8f9f-4f8b-9a6b-113320aa8b42" created on Node "node1" STEP: Reading in pod2 Nov 13 05:48:35.377: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3305 PodName:pod-786f85ab-8f9f-4f8b-9a6b-113320aa8b42 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:35.377: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:35.457: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-786f85ab-8f9f-4f8b-9a6b-113320aa8b42 in namespace persistent-local-volumes-test-3305 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:48:35.463: INFO: Deleting PersistentVolumeClaim "pvc-2ksfq" Nov 13 05:48:35.467: INFO: Deleting PersistentVolume 
"local-pvrfdvd" Nov 13 05:48:35.471: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-071fd554-6852-4167-b7cb-231ad0482a0d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3305 PodName:hostexec-node1-79w7m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:35.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-071fd554-6852-4167-b7cb-231ad0482a0d/file Nov 13 05:48:35.561: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-3305 PodName:hostexec-node1-79w7m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:35.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-071fd554-6852-4167-b7cb-231ad0482a0d Nov 13 05:48:35.650: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-071fd554-6852-4167-b7cb-231ad0482a0d] Namespace:persistent-local-volumes-test-3305 PodName:hostexec-node1-79w7m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:35.650: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:35.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3305" for this suite. 
• [SLOW TEST:22.982 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":398,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:31.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 STEP: Creating configMap with name configmap-test-volume-fc7657a9-26d1-40da-b62b-9a3e557cb3a5 STEP: Creating a pod to test consume configMaps Nov 13 05:48:31.884: INFO: Waiting up to 5m0s for pod "pod-configmaps-76bf4aab-dd2f-44e6-bf27-600d932b9f03" in namespace "configmap-4835" to be "Succeeded or Failed" Nov 13 05:48:31.887: INFO: Pod "pod-configmaps-76bf4aab-dd2f-44e6-bf27-600d932b9f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.687366ms Nov 13 05:48:33.891: INFO: Pod "pod-configmaps-76bf4aab-dd2f-44e6-bf27-600d932b9f03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006688493s Nov 13 05:48:35.894: INFO: Pod "pod-configmaps-76bf4aab-dd2f-44e6-bf27-600d932b9f03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00994816s STEP: Saw pod success Nov 13 05:48:35.894: INFO: Pod "pod-configmaps-76bf4aab-dd2f-44e6-bf27-600d932b9f03" satisfied condition "Succeeded or Failed" Nov 13 05:48:35.896: INFO: Trying to get logs from node node2 pod pod-configmaps-76bf4aab-dd2f-44e6-bf27-600d932b9f03 container agnhost-container: STEP: delete the pod Nov 13 05:48:35.910: INFO: Waiting for pod pod-configmaps-76bf4aab-dd2f-44e6-bf27-600d932b9f03 to disappear Nov 13 05:48:35.912: INFO: Pod pod-configmaps-76bf4aab-dd2f-44e6-bf27-600d932b9f03 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:35.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4835" for this suite. 
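The ConfigMap case above exercises defaultMode together with fsGroup on the projected files. A rough stand-alone equivalent, not taken from the suite (names, image, and values here are illustrative), is:

kubectl create configmap cm-fsgroup-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-fsgroup-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/config && cat /etc/config/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: cm-fsgroup-demo
      defaultMode: 0440
EOF

kubectl logs cm-fsgroup-demo   # the group id on the projected file should show 2000 (the fsGroup)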
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":19,"skipped":817,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:35.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:48:35.971: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:35.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9448" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:27.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:48:27.795: INFO: The status of Pod test-hostpath-type-6pncn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:48:29.799: INFO: The status of Pod test-hostpath-type-6pncn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:48:31.798: INFO: The status of Pod test-hostpath-type-6pncn is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 STEP: Creating pod STEP: Checking for 
HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:39.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-6077" for this suite. • [SLOW TEST:12.098 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile","total":-1,"completed":9,"skipped":306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:35.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:48:39.815: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9eb03d61-cc46-41fc-9fd8-606baa67f374] Namespace:persistent-local-volumes-test-258 PodName:hostexec-node2-d7wk8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:39.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:48:39.905: INFO: Creating a PV followed by a PVC Nov 13 05:48:39.911: INFO: Waiting for PV local-pv2gx48 to bind to PVC pvc-rhdb8 Nov 13 05:48:39.911: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-rhdb8] to have phase Bound Nov 13 05:48:39.913: INFO: PersistentVolumeClaim pvc-rhdb8 found but phase is Pending instead of Bound. 
Nov 13 05:48:41.918: INFO: PersistentVolumeClaim pvc-rhdb8 found and phase=Bound (2.007197147s) Nov 13 05:48:41.918: INFO: Waiting up to 3m0s for PersistentVolume local-pv2gx48 to have phase Bound Nov 13 05:48:41.920: INFO: PersistentVolume local-pv2gx48 found and phase=Bound (1.947959ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 13 05:48:45.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-258 exec pod-d29c2ff1-d0e3-4b96-934b-0b68181509ef --namespace=persistent-local-volumes-test-258 -- stat -c %g /mnt/volume1' Nov 13 05:48:46.167: INFO: stderr: "" Nov 13 05:48:46.167: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-d29c2ff1-d0e3-4b96-934b-0b68181509ef in namespace persistent-local-volumes-test-258 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:48:46.172: INFO: Deleting PersistentVolumeClaim "pvc-rhdb8" Nov 13 05:48:46.176: INFO: Deleting PersistentVolume "local-pv2gx48" STEP: Removing the test directory Nov 13 05:48:46.180: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9eb03d61-cc46-41fc-9fd8-606baa67f374] Namespace:persistent-local-volumes-test-258 PodName:hostexec-node2-d7wk8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:46.180: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:46.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-258" for this suite. 
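The fsGroup assertion above reduces to a single exec against any pod that mounts the volume with securityContext.fsGroup set; with the namespace and pod name taken from this run it is simply:

kubectl --kubeconfig=/root/.kube/config -n persistent-local-volumes-test-258 \
    exec pod-d29c2ff1-d0e3-4b96-934b-0b68181509ef -- stat -c %g /mnt/volume1
# prints 1234, the fsGroup the test configured on the pod's securityContext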
• [SLOW TEST:10.507 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":11,"skipped":401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:36.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-5d0e5301-d587-4926-ae40-60a3014560cb" Nov 13 05:48:40.091: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5d0e5301-d587-4926-ae40-60a3014560cb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5d0e5301-d587-4926-ae40-60a3014560cb" "/tmp/local-volume-test-5d0e5301-d587-4926-ae40-60a3014560cb"] Namespace:persistent-local-volumes-test-1388 PodName:hostexec-node2-kh4xz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:40.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:48:40.184: INFO: Creating a PV followed by a PVC Nov 13 05:48:40.190: INFO: Waiting for PV local-pvtz5bn to bind to PVC pvc-dsq78 Nov 13 05:48:40.190: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dsq78] to have phase Bound Nov 13 05:48:40.192: INFO: PersistentVolumeClaim pvc-dsq78 found but phase is Pending instead of Bound. 
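For the tmpfs volume type being set up here, the node-side preparation and teardown performed by the hostexec pod amount to the following (path shortened to a placeholder for readability):

dir=/tmp/local-volume-test-example        # placeholder for the generated path
mkdir -p "$dir"
mount -t tmpfs -o size=10m tmpfs-"$dir" "$dir"   # 10 MiB RAM-backed mount used as the local PV

# ...the test pod then writes and reads ordinary files under the mount...

umount "$dir"
rm -r "$dir"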
Nov 13 05:48:42.196: INFO: PersistentVolumeClaim pvc-dsq78 found and phase=Bound (2.006711692s) Nov 13 05:48:42.196: INFO: Waiting up to 3m0s for PersistentVolume local-pvtz5bn to have phase Bound Nov 13 05:48:42.199: INFO: PersistentVolume local-pvtz5bn found and phase=Bound (2.120041ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:48:46.223: INFO: pod "pod-2c252414-b172-4359-abbc-65e5bec5e293" created on Node "node2" STEP: Writing in pod1 Nov 13 05:48:46.223: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1388 PodName:pod-2c252414-b172-4359-abbc-65e5bec5e293 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:46.223: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:46.302: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:48:46.302: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1388 PodName:pod-2c252414-b172-4359-abbc-65e5bec5e293 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:46.302: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:46.373: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:48:46.373: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5d0e5301-d587-4926-ae40-60a3014560cb > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1388 PodName:pod-2c252414-b172-4359-abbc-65e5bec5e293 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:46.373: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:46.455: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5d0e5301-d587-4926-ae40-60a3014560cb > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2c252414-b172-4359-abbc-65e5bec5e293 in namespace persistent-local-volumes-test-1388 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:48:46.459: INFO: Deleting PersistentVolumeClaim "pvc-dsq78" Nov 13 05:48:46.463: INFO: Deleting PersistentVolume "local-pvtz5bn" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-5d0e5301-d587-4926-ae40-60a3014560cb" Nov 13 05:48:46.466: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5d0e5301-d587-4926-ae40-60a3014560cb"] Namespace:persistent-local-volumes-test-1388 PodName:hostexec-node2-kh4xz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Nov 13 05:48:46.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:48:46.552: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5d0e5301-d587-4926-ae40-60a3014560cb] Namespace:persistent-local-volumes-test-1388 PodName:hostexec-node2-kh4xz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:46.552: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:46.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1388" for this suite. • [SLOW TEST:10.628 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":20,"skipped":866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:26.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:48:30.677: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ea482831-3e52-4ac5-9847-54356e196805-backend && ln -s /tmp/local-volume-test-ea482831-3e52-4ac5-9847-54356e196805-backend /tmp/local-volume-test-ea482831-3e52-4ac5-9847-54356e196805] Namespace:persistent-local-volumes-test-7285 PodName:hostexec-node1-lqg76 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:30.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:48:30.781: INFO: Creating a PV followed by a PVC Nov 13 05:48:30.787: INFO: Waiting for PV local-pvkjcp8 to bind to PVC pvc-btz6s Nov 13 05:48:30.788: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-btz6s] to have phase Bound Nov 13 05:48:30.790: INFO: 
PersistentVolumeClaim pvc-btz6s found but phase is Pending instead of Bound. Nov 13 05:48:32.793: INFO: PersistentVolumeClaim pvc-btz6s found but phase is Pending instead of Bound. Nov 13 05:48:34.800: INFO: PersistentVolumeClaim pvc-btz6s found but phase is Pending instead of Bound. Nov 13 05:48:36.804: INFO: PersistentVolumeClaim pvc-btz6s found but phase is Pending instead of Bound. Nov 13 05:48:38.813: INFO: PersistentVolumeClaim pvc-btz6s found but phase is Pending instead of Bound. Nov 13 05:48:40.820: INFO: PersistentVolumeClaim pvc-btz6s found but phase is Pending instead of Bound. Nov 13 05:48:42.824: INFO: PersistentVolumeClaim pvc-btz6s found and phase=Bound (12.036939167s) Nov 13 05:48:42.825: INFO: Waiting up to 3m0s for PersistentVolume local-pvkjcp8 to have phase Bound Nov 13 05:48:42.827: INFO: PersistentVolume local-pvkjcp8 found and phase=Bound (2.158522ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:48:46.853: INFO: pod "pod-e8d5df23-f441-4091-8985-8897e0a5f152" created on Node "node1" STEP: Writing in pod1 Nov 13 05:48:46.853: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7285 PodName:pod-e8d5df23-f441-4091-8985-8897e0a5f152 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:46.853: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:46.963: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:48:46.963: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7285 PodName:pod-e8d5df23-f441-4091-8985-8897e0a5f152 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:46.963: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:47.052: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:48:47.052: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ea482831-3e52-4ac5-9847-54356e196805 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7285 PodName:pod-e8d5df23-f441-4091-8985-8897e0a5f152 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:48:47.052: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:47.163: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ea482831-3e52-4ac5-9847-54356e196805 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-e8d5df23-f441-4091-8985-8897e0a5f152 in namespace persistent-local-volumes-test-7285 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up 
PVC and PV Nov 13 05:48:47.168: INFO: Deleting PersistentVolumeClaim "pvc-btz6s" Nov 13 05:48:47.172: INFO: Deleting PersistentVolume "local-pvkjcp8" STEP: Removing the test directory Nov 13 05:48:47.176: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ea482831-3e52-4ac5-9847-54356e196805 && rm -r /tmp/local-volume-test-ea482831-3e52-4ac5-9847-54356e196805-backend] Namespace:persistent-local-volumes-test-7285 PodName:hostexec-node1-lqg76 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:47.176: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:47.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7285" for this suite. • [SLOW TEST:20.662 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":13,"skipped":430,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:46.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 STEP: Creating a pod to test hostPath subPath Nov 13 05:48:46.837: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4092" to be "Succeeded or Failed" Nov 13 05:48:46.840: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.509155ms Nov 13 05:48:48.843: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005996036s Nov 13 05:48:50.846: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00842153s Nov 13 05:48:52.849: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.011277443s STEP: Saw pod success Nov 13 05:48:52.849: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Nov 13 05:48:52.851: INFO: Trying to get logs from node node2 pod pod-host-path-test container test-container-2: STEP: delete the pod Nov 13 05:48:52.864: INFO: Waiting for pod pod-host-path-test to disappear Nov 13 05:48:52.866: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:52.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4092" for this suite. • [SLOW TEST:6.070 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":21,"skipped":934,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:52.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:112 [It] should be reschedulable [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Nov 13 05:48:52.910: INFO: Only supported for providers [openstack gce gke vsphere azure] (not local) [AfterEach] pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:322 [AfterEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:52.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-951" for this suite. 
S [SKIPPING] [0.039 seconds] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Default StorageClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:319 pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:320 should be reschedulable [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Only supported for providers [openstack gce gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:328 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:52.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Nov 13 05:48:52.993: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:52.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7063" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Nov 13 05:48:53.002: INFO: AfterEach: Cleaning up test resources Nov 13 05:48:53.002: INFO: pvc is nil Nov 13 05:48:53.002: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:03.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529 STEP: Building a driver namespace object, basename csi-mock-volumes-1450 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:47:03.161: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-attacher Nov 13 05:47:03.164: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1450 Nov 13 05:47:03.164: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1450 Nov 13 05:47:03.166: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1450 Nov 13 05:47:03.169: INFO: creating *v1.Role: csi-mock-volumes-1450-4265/external-attacher-cfg-csi-mock-volumes-1450 Nov 13 05:47:03.172: INFO: creating *v1.RoleBinding: csi-mock-volumes-1450-4265/csi-attacher-role-cfg Nov 13 05:47:03.174: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-provisioner Nov 13 05:47:03.178: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1450 Nov 13 05:47:03.178: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1450 Nov 13 05:47:03.180: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1450 Nov 13 05:47:03.183: INFO: creating *v1.Role: csi-mock-volumes-1450-4265/external-provisioner-cfg-csi-mock-volumes-1450 Nov 13 05:47:03.186: INFO: creating *v1.RoleBinding: csi-mock-volumes-1450-4265/csi-provisioner-role-cfg Nov 13 05:47:03.189: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-resizer Nov 13 05:47:03.191: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1450 Nov 13 05:47:03.191: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1450 Nov 13 05:47:03.195: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1450 Nov 13 05:47:03.197: INFO: creating *v1.Role: csi-mock-volumes-1450-4265/external-resizer-cfg-csi-mock-volumes-1450 Nov 13 05:47:03.200: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-1450-4265/csi-resizer-role-cfg Nov 13 05:47:03.202: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-snapshotter Nov 13 05:47:03.205: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1450 Nov 13 05:47:03.205: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1450 Nov 13 05:47:03.210: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1450 Nov 13 05:47:03.212: INFO: creating *v1.Role: csi-mock-volumes-1450-4265/external-snapshotter-leaderelection-csi-mock-volumes-1450 Nov 13 05:47:03.215: INFO: creating *v1.RoleBinding: csi-mock-volumes-1450-4265/external-snapshotter-leaderelection Nov 13 05:47:03.220: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-mock Nov 13 05:47:03.222: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1450 Nov 13 05:47:03.224: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1450 Nov 13 05:47:03.227: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1450 Nov 13 05:47:03.230: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1450 Nov 13 05:47:03.233: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1450 Nov 13 05:47:03.236: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1450 Nov 13 05:47:03.238: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1450 Nov 13 05:47:03.240: INFO: creating *v1.StatefulSet: csi-mock-volumes-1450-4265/csi-mockplugin Nov 13 05:47:03.244: INFO: creating *v1.StatefulSet: csi-mock-volumes-1450-4265/csi-mockplugin-attacher Nov 13 05:47:03.248: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1450 to register on node node1 STEP: Creating pod Nov 13 05:47:19.522: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:47:19.526: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wr28d] to have phase Bound Nov 13 05:47:19.528: INFO: PersistentVolumeClaim pvc-wr28d found but phase is Pending instead of Bound. Nov 13 05:47:21.530: INFO: PersistentVolumeClaim pvc-wr28d found and phase=Bound (2.0045784s) STEP: Creating pod Nov 13 05:47:43.556: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:47:43.561: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fbnh9] to have phase Bound Nov 13 05:47:43.563: INFO: PersistentVolumeClaim pvc-fbnh9 found but phase is Pending instead of Bound. Nov 13 05:47:45.568: INFO: PersistentVolumeClaim pvc-fbnh9 found and phase=Bound (2.006762984s) STEP: Creating pod Nov 13 05:47:59.597: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:47:59.601: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jmwvh] to have phase Bound Nov 13 05:47:59.603: INFO: PersistentVolumeClaim pvc-jmwvh found but phase is Pending instead of Bound. 
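The attach limit this CSI mock test exercises can also be inspected directly: the per-node limit a CSI driver reports is published on the node's CSINode object, so once the mock driver has registered on node1, something along these lines (illustrative jsonpath; driver name taken from this run, the reported count depends on what the mock driver is configured to advertise) shows the allocatable count the scheduler enforces:

kubectl get csinode node1 \
  -o jsonpath='{range .spec.drivers[*]}{.name}{"\t"}{.allocatable.count}{"\n"}{end}'
# e.g. csi-mock-csi-mock-volumes-1450    <limit reported by the driver>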
Nov 13 05:48:01.607: INFO: PersistentVolumeClaim pvc-jmwvh found and phase=Bound (2.006035639s) STEP: Deleting pod pvc-volume-tester-5f82l Nov 13 05:48:11.632: INFO: Deleting pod "pvc-volume-tester-5f82l" in namespace "csi-mock-volumes-1450" Nov 13 05:48:11.635: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5f82l" to be fully deleted STEP: Deleting pod pvc-volume-tester-kqmqm Nov 13 05:48:17.640: INFO: Deleting pod "pvc-volume-tester-kqmqm" in namespace "csi-mock-volumes-1450" Nov 13 05:48:17.645: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kqmqm" to be fully deleted STEP: Deleting pod pvc-volume-tester-7zdfg Nov 13 05:48:23.652: INFO: Deleting pod "pvc-volume-tester-7zdfg" in namespace "csi-mock-volumes-1450" Nov 13 05:48:23.657: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7zdfg" to be fully deleted STEP: Deleting claim pvc-wr28d Nov 13 05:48:31.669: INFO: Waiting up to 2m0s for PersistentVolume pvc-758647a5-a225-470b-be65-400767a2f23c to get deleted Nov 13 05:48:31.672: INFO: PersistentVolume pvc-758647a5-a225-470b-be65-400767a2f23c found and phase=Bound (2.540453ms) Nov 13 05:48:33.676: INFO: PersistentVolume pvc-758647a5-a225-470b-be65-400767a2f23c was removed STEP: Deleting claim pvc-fbnh9 Nov 13 05:48:33.683: INFO: Waiting up to 2m0s for PersistentVolume pvc-cddc3e60-96a5-4fc7-a47e-27bc668c0659 to get deleted Nov 13 05:48:33.685: INFO: PersistentVolume pvc-cddc3e60-96a5-4fc7-a47e-27bc668c0659 found and phase=Bound (1.857028ms) Nov 13 05:48:35.688: INFO: PersistentVolume pvc-cddc3e60-96a5-4fc7-a47e-27bc668c0659 was removed STEP: Deleting claim pvc-jmwvh Nov 13 05:48:35.694: INFO: Waiting up to 2m0s for PersistentVolume pvc-ddb8155c-fbdd-4635-a9ad-c87cda73dc19 to get deleted Nov 13 05:48:35.696: INFO: PersistentVolume pvc-ddb8155c-fbdd-4635-a9ad-c87cda73dc19 found and phase=Bound (1.931862ms) Nov 13 05:48:37.699: INFO: PersistentVolume pvc-ddb8155c-fbdd-4635-a9ad-c87cda73dc19 was removed STEP: Deleting storageclass csi-mock-volumes-1450-scn94w8 STEP: Deleting storageclass csi-mock-volumes-1450-scrg9zh STEP: Deleting storageclass csi-mock-volumes-1450-sctk8gm STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1450 STEP: Waiting for namespaces [csi-mock-volumes-1450] to vanish STEP: uninstalling csi mock driver Nov 13 05:48:43.717: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-attacher Nov 13 05:48:43.721: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1450 Nov 13 05:48:43.725: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1450 Nov 13 05:48:43.728: INFO: deleting *v1.Role: csi-mock-volumes-1450-4265/external-attacher-cfg-csi-mock-volumes-1450 Nov 13 05:48:43.731: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1450-4265/csi-attacher-role-cfg Nov 13 05:48:43.734: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-provisioner Nov 13 05:48:43.737: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1450 Nov 13 05:48:43.741: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1450 Nov 13 05:48:43.744: INFO: deleting *v1.Role: csi-mock-volumes-1450-4265/external-provisioner-cfg-csi-mock-volumes-1450 Nov 13 05:48:43.747: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1450-4265/csi-provisioner-role-cfg Nov 13 05:48:43.751: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-resizer Nov 13 05:48:43.755: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1450 Nov 13 05:48:43.758: 
INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1450 Nov 13 05:48:43.761: INFO: deleting *v1.Role: csi-mock-volumes-1450-4265/external-resizer-cfg-csi-mock-volumes-1450 Nov 13 05:48:43.765: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1450-4265/csi-resizer-role-cfg Nov 13 05:48:43.768: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-snapshotter Nov 13 05:48:43.772: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1450 Nov 13 05:48:43.775: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1450 Nov 13 05:48:43.779: INFO: deleting *v1.Role: csi-mock-volumes-1450-4265/external-snapshotter-leaderelection-csi-mock-volumes-1450 Nov 13 05:48:43.783: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1450-4265/external-snapshotter-leaderelection Nov 13 05:48:43.786: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1450-4265/csi-mock Nov 13 05:48:43.789: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1450 Nov 13 05:48:43.794: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1450 Nov 13 05:48:43.797: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1450 Nov 13 05:48:43.801: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1450 Nov 13 05:48:43.805: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1450 Nov 13 05:48:43.809: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1450 Nov 13 05:48:43.814: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1450 Nov 13 05:48:43.818: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1450-4265/csi-mockplugin Nov 13 05:48:43.821: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1450-4265/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1450-4265 STEP: Waiting for namespaces [csi-mock-volumes-1450-4265] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:55.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:112.765 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI volume limit information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:528 should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]","total":-1,"completed":10,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:46.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-6f840e6d-ddc0-42ba-9d09-ab3367fceb22" Nov 13 05:48:50.390: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6f840e6d-ddc0-42ba-9d09-ab3367fceb22 && dd if=/dev/zero of=/tmp/local-volume-test-6f840e6d-ddc0-42ba-9d09-ab3367fceb22/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6f840e6d-ddc0-42ba-9d09-ab3367fceb22/file] Namespace:persistent-local-volumes-test-6657 PodName:hostexec-node1-j47n4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:50.390: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:50.634: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6f840e6d-ddc0-42ba-9d09-ab3367fceb22/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6657 PodName:hostexec-node1-j47n4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:50.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:48:50.813: INFO: Creating a PV followed by a PVC Nov 13 05:48:50.820: INFO: Waiting for PV local-pv4975n to bind to PVC pvc-srp4s Nov 13 05:48:50.820: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-srp4s] to have phase Bound Nov 13 05:48:50.823: INFO: PersistentVolumeClaim pvc-srp4s found but phase is Pending instead of Bound. Nov 13 05:48:52.825: INFO: PersistentVolumeClaim pvc-srp4s found but phase is Pending instead of Bound. Nov 13 05:48:54.831: INFO: PersistentVolumeClaim pvc-srp4s found but phase is Pending instead of Bound. Nov 13 05:48:56.835: INFO: PersistentVolumeClaim pvc-srp4s found and phase=Bound (6.014171952s) Nov 13 05:48:56.835: INFO: Waiting up to 3m0s for PersistentVolume local-pv4975n to have phase Bound Nov 13 05:48:56.837: INFO: PersistentVolume local-pv4975n found and phase=Bound (2.157181ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Nov 13 05:48:56.841: INFO: We don't set fsGroup on block device, skipped. 
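The skip just logged ("We don't set fsGroup on block device") follows from how raw-block volumes are consumed: the claim uses volumeMode: Block and the pod receives a device node via volumeDevices rather than a filesystem mount, so there is nothing for the kubelet to chown to the fsGroup. An illustrative fragment only (names are hypothetical, not from this suite):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeDevices:            # device node appears at devicePath; no filesystem, no fsGroup chown
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-block-pvc   # hypothetical PVC created with spec.volumeMode: Block
EOF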
[AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:48:56.843: INFO: Deleting PersistentVolumeClaim "pvc-srp4s" Nov 13 05:48:56.847: INFO: Deleting PersistentVolume "local-pv4975n" Nov 13 05:48:56.851: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6f840e6d-ddc0-42ba-9d09-ab3367fceb22/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6657 PodName:hostexec-node1-j47n4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:56.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-6f840e6d-ddc0-42ba-9d09-ab3367fceb22/file Nov 13 05:48:57.106: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6657 PodName:hostexec-node1-j47n4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:57.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-6f840e6d-ddc0-42ba-9d09-ab3367fceb22 Nov 13 05:48:57.229: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6f840e6d-ddc0-42ba-9d09-ab3367fceb22] Namespace:persistent-local-volumes-test-6657 PodName:hostexec-node1-j47n4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:57.229: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:57.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6657" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [11.036 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:47.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-5ac4fc8d-f8fc-4902-9200-c53ec417fb04" Nov 13 05:48:53.357: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5ac4fc8d-f8fc-4902-9200-c53ec417fb04 && dd if=/dev/zero of=/tmp/local-volume-test-5ac4fc8d-f8fc-4902-9200-c53ec417fb04/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-5ac4fc8d-f8fc-4902-9200-c53ec417fb04/file] Namespace:persistent-local-volumes-test-6075 PodName:hostexec-node1-scbm7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:53.357: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:48:53.474: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-5ac4fc8d-f8fc-4902-9200-c53ec417fb04/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6075 PodName:hostexec-node1-scbm7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:53.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:48:53.657: INFO: Creating a PV followed by a PVC Nov 13 05:48:53.664: INFO: Waiting for PV local-pvgqmn4 to bind to PVC pvc-h74p4 Nov 13 05:48:53.664: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-h74p4] to have phase Bound Nov 13 05:48:53.667: INFO: PersistentVolumeClaim pvc-h74p4 found but phase is Pending instead of Bound. Nov 13 05:48:55.674: INFO: PersistentVolumeClaim pvc-h74p4 found but phase is Pending instead of Bound. Nov 13 05:48:57.678: INFO: PersistentVolumeClaim pvc-h74p4 found and phase=Bound (4.013428758s) Nov 13 05:48:57.678: INFO: Waiting up to 3m0s for PersistentVolume local-pvgqmn4 to have phase Bound Nov 13 05:48:57.680: INFO: PersistentVolume local-pvgqmn4 found and phase=Bound (2.661718ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Nov 13 05:48:57.685: INFO: We don't set fsGroup on block device, skipped. 
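Both [Volume type: block] fsGroup specs are skipped because fsGroup only applies to a filesystem; a raw block PV is handed to the pod as a device node, so there is nothing to chgrp. For orientation, a local PV/PVC pair of the shape these tests bind would look roughly like this (a sketch only: names, size, and storage class are illustrative, and the device path is the loop device created above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-block-example           # illustrative name
spec:
  capacity:
    storage: 10Mi
  volumeMode: Block                      # raw block device, no filesystem, hence no fsGroup handling
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage        # illustrative class name
  local:
    path: /dev/loop0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-block-example                # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Mi
EOF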
[AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:48:57.687: INFO: Deleting PersistentVolumeClaim "pvc-h74p4" Nov 13 05:48:57.692: INFO: Deleting PersistentVolume "local-pvgqmn4" Nov 13 05:48:57.696: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-5ac4fc8d-f8fc-4902-9200-c53ec417fb04/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6075 PodName:hostexec-node1-scbm7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:57.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node1" at path /tmp/local-volume-test-5ac4fc8d-f8fc-4902-9200-c53ec417fb04/file Nov 13 05:48:57.797: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-6075 PodName:hostexec-node1-scbm7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:57.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-5ac4fc8d-f8fc-4902-9200-c53ec417fb04 Nov 13 05:48:57.903: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5ac4fc8d-f8fc-4902-9200-c53ec417fb04] Namespace:persistent-local-volumes-test-6075 PodName:hostexec-node1-scbm7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:57.903: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:48:58.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6075" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [10.702 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:53.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 13 05:48:53.060: INFO: The status of Pod test-hostpath-type-tvjsn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:48:55.064: INFO: The status of Pod test-hostpath-type-tvjsn is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:48:57.065: INFO: The status of Pod test-hostpath-type-tvjsn is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:05.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-7501" for this suite. 
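The HostPathType File spec above relies on the hostPath volume's type field: FileOrCreate creates 'afile' if it is missing, and an unset type performs no validation at mount time. A hedged sketch of the FileOrCreate side (pod name, image, and the exact host path are assumptions, not taken from the run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-filetype-example        # illustrative name
spec:
  nodeName: node2                        # the test pins its pods to the node it prepared
  restartPolicy: Never
  containers:
  - name: c
    image: busybox                       # illustrative image
    command: ["sh", "-c", "test -f /mnt/afile && echo file exists"]
    volumeMounts:
    - name: f
      mountPath: /mnt/afile
  volumes:
  - name: f
    hostPath:
      path: /mnt/test/afile              # assumed path; the log only names the file 'afile'
      type: FileOrCreate                 # create an empty file if the path does not exist
EOF
# The [It] above then mounts the same file with type: "" (HostPathUnset), which skips
# any type check and simply mounts whatever is at the path.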
• [SLOW TEST:12.095 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset","total":-1,"completed":22,"skipped":975,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:55.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:48:59.998: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-65ec65f9-fc97-4d19-b492-63c257491a06 && mount --bind /tmp/local-volume-test-65ec65f9-fc97-4d19-b492-63c257491a06 /tmp/local-volume-test-65ec65f9-fc97-4d19-b492-63c257491a06] Namespace:persistent-local-volumes-test-8433 PodName:hostexec-node1-h4x6x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:48:59.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:49:00.092: INFO: Creating a PV followed by a PVC Nov 13 05:49:00.101: INFO: Waiting for PV local-pvll6cs to bind to PVC pvc-6qw8m Nov 13 05:49:00.101: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6qw8m] to have phase Bound Nov 13 05:49:00.103: INFO: PersistentVolumeClaim pvc-6qw8m found but phase is Pending instead of Bound. 
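The dir-bindmounted volume initialized above is simply a directory bind-mounted over itself, which gives the local PV a distinct mount point without any extra storage; as run on node1:

DIR=/tmp/local-volume-test-65ec65f9-fc97-4d19-b492-63c257491a06
mkdir "$DIR"
mount --bind "$DIR" "$DIR"      # the directory becomes its own mount point
# cleanup later in this spec: umount "$DIR" && rm -r "$DIR"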
Nov 13 05:49:02.108: INFO: PersistentVolumeClaim pvc-6qw8m found and phase=Bound (2.00637912s) Nov 13 05:49:02.108: INFO: Waiting up to 3m0s for PersistentVolume local-pvll6cs to have phase Bound Nov 13 05:49:02.110: INFO: PersistentVolume local-pvll6cs found and phase=Bound (2.346944ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:49:06.138: INFO: pod "pod-68fd6ccd-552f-480b-885e-10be450b0b0c" created on Node "node1" STEP: Writing in pod1 Nov 13 05:49:06.138: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8433 PodName:pod-68fd6ccd-552f-480b-885e-10be450b0b0c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:06.138: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:06.224: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:49:06.224: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8433 PodName:pod-68fd6ccd-552f-480b-885e-10be450b0b0c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:06.224: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:06.338: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:49:06.338: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-65ec65f9-fc97-4d19-b492-63c257491a06 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8433 PodName:pod-68fd6ccd-552f-480b-885e-10be450b0b0c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:06.338: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:06.414: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-65ec65f9-fc97-4d19-b492-63c257491a06 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-68fd6ccd-552f-480b-885e-10be450b0b0c in namespace persistent-local-volumes-test-8433 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:49:06.418: INFO: Deleting PersistentVolumeClaim "pvc-6qw8m" Nov 13 05:49:06.423: INFO: Deleting PersistentVolume "local-pvll6cs" STEP: Removing the test directory Nov 13 05:49:06.426: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-65ec65f9-fc97-4d19-b492-63c257491a06 && rm -r /tmp/local-volume-test-65ec65f9-fc97-4d19-b492-63c257491a06] Namespace:persistent-local-volumes-test-8433 PodName:hostexec-node1-h4x6x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Nov 13 05:49:06.426: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:06.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8433" for this suite. • [SLOW TEST:10.606 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":11,"skipped":476,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:05.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 STEP: Creating configMap with name configmap-test-volume-f22fafc1-edb1-466f-ad02-de12cb5e1f74 STEP: Creating a pod to test consume configMaps Nov 13 05:49:05.173: INFO: Waiting up to 5m0s for pod "pod-configmaps-19f1c5b7-5ea0-4249-a144-166f868c5e31" in namespace "configmap-3742" to be "Succeeded or Failed" Nov 13 05:49:05.175: INFO: Pod "pod-configmaps-19f1c5b7-5ea0-4249-a144-166f868c5e31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293746ms Nov 13 05:49:07.178: INFO: Pod "pod-configmaps-19f1c5b7-5ea0-4249-a144-166f868c5e31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005036507s Nov 13 05:49:09.181: INFO: Pod "pod-configmaps-19f1c5b7-5ea0-4249-a144-166f868c5e31": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008624403s STEP: Saw pod success Nov 13 05:49:09.181: INFO: Pod "pod-configmaps-19f1c5b7-5ea0-4249-a144-166f868c5e31" satisfied condition "Succeeded or Failed" Nov 13 05:49:09.184: INFO: Trying to get logs from node node2 pod pod-configmaps-19f1c5b7-5ea0-4249-a144-166f868c5e31 container agnhost-container: STEP: delete the pod Nov 13 05:49:09.198: INFO: Waiting for pod pod-configmaps-19f1c5b7-5ea0-4249-a144-166f868c5e31 to disappear Nov 13 05:49:09.200: INFO: Pod pod-configmaps-19f1c5b7-5ea0-4249-a144-166f868c5e31 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:09.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3742" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":23,"skipped":977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:57.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:49:01.446: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-41c863a7-7fee-4808-9f5e-e1b97f3f4e6b] Namespace:persistent-local-volumes-test-8486 PodName:hostexec-node2-zlqx2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:01.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:49:01.752: INFO: Creating a PV followed by a PVC Nov 13 05:49:01.759: INFO: Waiting for PV local-pv4p94w to bind to PVC pvc-xxv9s Nov 13 05:49:01.759: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xxv9s] to have phase Bound Nov 13 05:49:01.762: INFO: PersistentVolumeClaim pvc-xxv9s found but phase is Pending instead of Bound. Nov 13 05:49:03.766: INFO: PersistentVolumeClaim pvc-xxv9s found but phase is Pending instead of Bound. Nov 13 05:49:05.769: INFO: PersistentVolumeClaim pvc-xxv9s found but phase is Pending instead of Bound. Nov 13 05:49:07.772: INFO: PersistentVolumeClaim pvc-xxv9s found but phase is Pending instead of Bound. Nov 13 05:49:09.778: INFO: PersistentVolumeClaim pvc-xxv9s found but phase is Pending instead of Bound. 
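The ConfigMap spec that passed above ('consumable from pods in volume as non-root with FSGroup') exercises a pod that mounts a ConfigMap while running as a non-root UID with an fsGroup applied to the volume. A rough equivalent, with names and UID/GID values chosen for illustration rather than taken from the run:

kubectl create configmap cm-example --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-fsgroup-example    # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                      # non-root
    fsGroup: 1001                        # group ownership applied to the mounted volume
  containers:
  - name: test-container
    image: busybox                       # illustrative image
    command: ["sh", "-c", "ls -ln /etc/cm && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-example
EOF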
Nov 13 05:49:11.780: INFO: PersistentVolumeClaim pvc-xxv9s found and phase=Bound (10.020425101s) Nov 13 05:49:11.780: INFO: Waiting up to 3m0s for PersistentVolume local-pv4p94w to have phase Bound Nov 13 05:49:11.781: INFO: PersistentVolume local-pv4p94w found and phase=Bound (1.581527ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:49:15.805: INFO: pod "pod-9f7fba0e-f168-4c70-b2aa-5b5cc2443c56" created on Node "node2" STEP: Writing in pod1 Nov 13 05:49:15.805: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8486 PodName:pod-9f7fba0e-f168-4c70-b2aa-5b5cc2443c56 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:15.805: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:15.903: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:49:15.903: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8486 PodName:pod-9f7fba0e-f168-4c70-b2aa-5b5cc2443c56 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:15.903: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:16.050: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-9f7fba0e-f168-4c70-b2aa-5b5cc2443c56 in namespace persistent-local-volumes-test-8486 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:49:16.056: INFO: Deleting PersistentVolumeClaim "pvc-xxv9s" Nov 13 05:49:16.059: INFO: Deleting PersistentVolume "local-pv4p94w" STEP: Removing the test directory Nov 13 05:49:16.062: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-41c863a7-7fee-4808-9f5e-e1b97f3f4e6b] Namespace:persistent-local-volumes-test-8486 PodName:hostexec-node2-zlqx2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:16.062: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:16.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8486" for this suite. 
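The read/write verification in the [Volume type: dir] spec above is a plain shell exec inside the test pod; outside the framework the same check can be reproduced with kubectl exec (pod and namespace names below are the generated ones from this run):

# what the framework ran via /bin/sh -c inside write-pod
mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file
cat /mnt/volume1/test-file        # expected output: test-file-content

# rough equivalent from outside the pod
kubectl --namespace=persistent-local-volumes-test-8486 \
  exec pod-9f7fba0e-f168-4c70-b2aa-5b5cc2443c56 -- sh -c 'cat /mnt/volume1/test-file'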
• [SLOW TEST:18.779 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":443,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:06.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:49:08.617: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84-backend && mount --bind /tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84-backend /tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84-backend && ln -s /tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84-backend /tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84] Namespace:persistent-local-volumes-test-8345 PodName:hostexec-node1-c88th ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:08.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:49:08.744: INFO: Creating a PV followed by a PVC Nov 13 05:49:08.751: INFO: Waiting for PV local-pvskvz8 to bind to PVC pvc-g7prq Nov 13 05:49:08.751: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-g7prq] to have phase Bound Nov 13 05:49:08.753: INFO: PersistentVolumeClaim pvc-g7prq found but phase is Pending instead of Bound. Nov 13 05:49:10.757: INFO: PersistentVolumeClaim pvc-g7prq found but phase is Pending instead of Bound. 
Nov 13 05:49:12.761: INFO: PersistentVolumeClaim pvc-g7prq found and phase=Bound (4.00989738s) Nov 13 05:49:12.761: INFO: Waiting up to 3m0s for PersistentVolume local-pvskvz8 to have phase Bound Nov 13 05:49:12.763: INFO: PersistentVolume local-pvskvz8 found and phase=Bound (2.227316ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:49:16.787: INFO: pod "pod-fb5247bf-c37d-4f7b-a7bc-d5e7cd865933" created on Node "node1" STEP: Writing in pod1 Nov 13 05:49:16.787: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8345 PodName:pod-fb5247bf-c37d-4f7b-a7bc-d5e7cd865933 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:16.787: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:16.888: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:49:16.888: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8345 PodName:pod-fb5247bf-c37d-4f7b-a7bc-d5e7cd865933 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:16.889: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:16.987: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-fb5247bf-c37d-4f7b-a7bc-d5e7cd865933 in namespace persistent-local-volumes-test-8345 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:49:16.992: INFO: Deleting PersistentVolumeClaim "pvc-g7prq" Nov 13 05:49:16.996: INFO: Deleting PersistentVolume "local-pvskvz8" STEP: Removing the test directory Nov 13 05:49:17.001: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84 && umount /tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84-backend && rm -r /tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84-backend] Namespace:persistent-local-volumes-test-8345 PodName:hostexec-node1-c88th ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:17.001: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:17.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8345" for this suite. 
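The dir-link-bindmounted variant layers a symlink on top of a self-bind-mounted backend directory, so the PV path is a link whose target is a mount point; setup and teardown as executed on node1 above:

BACKEND=/tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84-backend
LINK=/tmp/local-volume-test-4d6f02e6-1f50-430b-a168-51af589dea84
mkdir "$BACKEND"
mount --bind "$BACKEND" "$BACKEND"   # make the backend directory its own mount point
ln -s "$BACKEND" "$LINK"             # the PV path points at this symlink

# teardown
rm "$LINK" && umount "$BACKEND" && rm -r "$BACKEND"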
• [SLOW TEST:10.556 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":481,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:39.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-5757 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:48:40.066: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-attacher Nov 13 05:48:40.069: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5757 Nov 13 05:48:40.069: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5757 Nov 13 05:48:40.071: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5757 Nov 13 05:48:40.075: INFO: creating *v1.Role: csi-mock-volumes-5757-1805/external-attacher-cfg-csi-mock-volumes-5757 Nov 13 05:48:40.077: INFO: creating *v1.RoleBinding: csi-mock-volumes-5757-1805/csi-attacher-role-cfg Nov 13 05:48:40.081: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-provisioner Nov 13 05:48:40.084: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5757 Nov 13 05:48:40.084: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5757 Nov 13 05:48:40.086: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5757 Nov 13 05:48:40.090: INFO: creating *v1.Role: csi-mock-volumes-5757-1805/external-provisioner-cfg-csi-mock-volumes-5757 Nov 13 05:48:40.092: INFO: creating *v1.RoleBinding: csi-mock-volumes-5757-1805/csi-provisioner-role-cfg Nov 13 05:48:40.095: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-resizer Nov 13 05:48:40.097: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5757 Nov 13 05:48:40.097: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5757 Nov 13 05:48:40.099: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5757 Nov 13 05:48:40.102: INFO: creating *v1.Role: csi-mock-volumes-5757-1805/external-resizer-cfg-csi-mock-volumes-5757 Nov 13 05:48:40.105: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-5757-1805/csi-resizer-role-cfg Nov 13 05:48:40.108: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-snapshotter Nov 13 05:48:40.111: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5757 Nov 13 05:48:40.111: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5757 Nov 13 05:48:40.114: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5757 Nov 13 05:48:40.116: INFO: creating *v1.Role: csi-mock-volumes-5757-1805/external-snapshotter-leaderelection-csi-mock-volumes-5757 Nov 13 05:48:40.118: INFO: creating *v1.RoleBinding: csi-mock-volumes-5757-1805/external-snapshotter-leaderelection Nov 13 05:48:40.121: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-mock Nov 13 05:48:40.123: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5757 Nov 13 05:48:40.126: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5757 Nov 13 05:48:40.130: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5757 Nov 13 05:48:40.132: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5757 Nov 13 05:48:40.134: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5757 Nov 13 05:48:40.137: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5757 Nov 13 05:48:40.139: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5757 Nov 13 05:48:40.143: INFO: creating *v1.StatefulSet: csi-mock-volumes-5757-1805/csi-mockplugin Nov 13 05:48:40.147: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5757 Nov 13 05:48:40.149: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5757" Nov 13 05:48:40.152: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5757 to register on node node1 STEP: Creating pod Nov 13 05:48:45.165: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:48:45.170: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-nwcx7] to have phase Bound Nov 13 05:48:45.172: INFO: PersistentVolumeClaim pvc-nwcx7 found but phase is Pending instead of Bound. 
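The spec being set up here ('should not require VolumeAttach for drivers without attachment') hinges on the CSIDriver object: a driver that declares attachRequired: false never gets VolumeAttachment objects created for its volumes, because the attach/detach step is skipped. A minimal sketch of such a driver object and the corresponding check (the driver name is illustrative; the run above registers csi-mock-csi-mock-volumes-5757):

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.mock                 # illustrative driver name
spec:
  attachRequired: false                  # no attach step, so no VolumeAttachment objects
  podInfoOnMount: false
  volumeLifecycleModes: ["Persistent"]
EOF

# after scheduling a pod that uses a volume from this driver, the assertion is simply
# that nothing is listed for it here:
kubectl get volumeattachments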
Nov 13 05:48:47.176: INFO: PersistentVolumeClaim pvc-nwcx7 found and phase=Bound (2.005791121s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-bpxl5 Nov 13 05:48:53.202: INFO: Deleting pod "pvc-volume-tester-bpxl5" in namespace "csi-mock-volumes-5757" Nov 13 05:48:53.206: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bpxl5" to be fully deleted STEP: Deleting claim pvc-nwcx7 Nov 13 05:49:03.218: INFO: Waiting up to 2m0s for PersistentVolume pvc-f84bae72-95b1-483b-86bd-c4256f3e0b31 to get deleted Nov 13 05:49:03.220: INFO: PersistentVolume pvc-f84bae72-95b1-483b-86bd-c4256f3e0b31 found and phase=Bound (1.636293ms) Nov 13 05:49:05.223: INFO: PersistentVolume pvc-f84bae72-95b1-483b-86bd-c4256f3e0b31 was removed STEP: Deleting storageclass csi-mock-volumes-5757-sc4lcmb STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5757 STEP: Waiting for namespaces [csi-mock-volumes-5757] to vanish STEP: uninstalling csi mock driver Nov 13 05:49:11.232: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-attacher Nov 13 05:49:11.236: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5757 Nov 13 05:49:11.239: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5757 Nov 13 05:49:11.242: INFO: deleting *v1.Role: csi-mock-volumes-5757-1805/external-attacher-cfg-csi-mock-volumes-5757 Nov 13 05:49:11.246: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5757-1805/csi-attacher-role-cfg Nov 13 05:49:11.249: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-provisioner Nov 13 05:49:11.252: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5757 Nov 13 05:49:11.255: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5757 Nov 13 05:49:11.258: INFO: deleting *v1.Role: csi-mock-volumes-5757-1805/external-provisioner-cfg-csi-mock-volumes-5757 Nov 13 05:49:11.262: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5757-1805/csi-provisioner-role-cfg Nov 13 05:49:11.266: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-resizer Nov 13 05:49:11.273: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5757 Nov 13 05:49:11.281: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5757 Nov 13 05:49:11.289: INFO: deleting *v1.Role: csi-mock-volumes-5757-1805/external-resizer-cfg-csi-mock-volumes-5757 Nov 13 05:49:11.295: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5757-1805/csi-resizer-role-cfg Nov 13 05:49:11.298: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-snapshotter Nov 13 05:49:11.302: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5757 Nov 13 05:49:11.305: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5757 Nov 13 05:49:11.308: INFO: deleting *v1.Role: csi-mock-volumes-5757-1805/external-snapshotter-leaderelection-csi-mock-volumes-5757 Nov 13 05:49:11.312: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5757-1805/external-snapshotter-leaderelection Nov 13 05:49:11.316: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5757-1805/csi-mock Nov 13 05:49:11.319: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5757 Nov 13 05:49:11.322: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5757 Nov 13 05:49:11.325: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5757 Nov 13 05:49:11.332: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5757 Nov 13 05:49:11.335: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5757 Nov 13 05:49:11.338: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5757 Nov 13 05:49:11.344: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5757 Nov 13 05:49:11.347: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5757-1805/csi-mockplugin Nov 13 05:49:11.352: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5757 STEP: deleting the driver namespace: csi-mock-volumes-5757-1805 STEP: Waiting for namespaces [csi-mock-volumes-5757-1805] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:23.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:43.375 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":10,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:09.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-a46e36d0-9b7b-4bd0-9068-ca93a1615f27" Nov 13 05:49:11.359: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a46e36d0-9b7b-4bd0-9068-ca93a1615f27" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a46e36d0-9b7b-4bd0-9068-ca93a1615f27" "/tmp/local-volume-test-a46e36d0-9b7b-4bd0-9068-ca93a1615f27"] Namespace:persistent-local-volumes-test-9217 PodName:hostexec-node2-tb6c7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:11.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:49:11.443: INFO: Creating a PV followed by a PVC Nov 13 05:49:11.454: INFO: Waiting for PV local-pvph2tj to bind to PVC pvc-qm5nx Nov 13 05:49:11.454: INFO: Waiting up to timeout=3m0s for 
PersistentVolumeClaims [pvc-qm5nx] to have phase Bound Nov 13 05:49:11.457: INFO: PersistentVolumeClaim pvc-qm5nx found but phase is Pending instead of Bound. Nov 13 05:49:13.460: INFO: PersistentVolumeClaim pvc-qm5nx found and phase=Bound (2.005744116s) Nov 13 05:49:13.460: INFO: Waiting up to 3m0s for PersistentVolume local-pvph2tj to have phase Bound Nov 13 05:49:13.463: INFO: PersistentVolume local-pvph2tj found and phase=Bound (2.407939ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 13 05:49:17.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9217 exec pod-fe4e2dde-6bb2-400a-aab0-c7adf1d33b0a --namespace=persistent-local-volumes-test-9217 -- stat -c %g /mnt/volume1' Nov 13 05:49:17.917: INFO: stderr: "" Nov 13 05:49:17.917: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 13 05:49:23.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9217 exec pod-1234fd18-844f-45e9-b236-6788684caa29 --namespace=persistent-local-volumes-test-9217 -- stat -c %g /mnt/volume1' Nov 13 05:49:24.199: INFO: stderr: "" Nov 13 05:49:24.199: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-fe4e2dde-6bb2-400a-aab0-c7adf1d33b0a in namespace persistent-local-volumes-test-9217 STEP: Deleting second pod STEP: Deleting pod pod-1234fd18-844f-45e9-b236-6788684caa29 in namespace persistent-local-volumes-test-9217 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:49:24.208: INFO: Deleting PersistentVolumeClaim "pvc-qm5nx" Nov 13 05:49:24.212: INFO: Deleting PersistentVolume "local-pvph2tj" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-a46e36d0-9b7b-4bd0-9068-ca93a1615f27" Nov 13 05:49:24.215: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a46e36d0-9b7b-4bd0-9068-ca93a1615f27"] Namespace:persistent-local-volumes-test-9217 PodName:hostexec-node2-tb6c7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:24.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:49:24.350: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a46e36d0-9b7b-4bd0-9068-ca93a1615f27] Namespace:persistent-local-volumes-test-9217 PodName:hostexec-node2-tb6c7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:24.350: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:24.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9217" for this 
suite. • [SLOW TEST:15.200 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":24,"skipped":1019,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:24.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 13 05:49:24.651: INFO: The status of Pod test-hostpath-type-tjd6j is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:49:26.654: INFO: The status of Pod test-hostpath-type-tjd6j is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:49:28.655: INFO: The status of Pod test-hostpath-type-tjd6j is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:49:30.655: INFO: The status of Pod test-hostpath-type-tjd6j is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Nov 13 05:49:30.657: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-9963 PodName:test-hostpath-type-tjd6j ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:30.657: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:32.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-9963" for this suite. 
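The tmpfs volume type used in the [Volume type: tmpfs] spec above is a small RAM-backed mount created on the node before the PV is defined and unmounted again during cleanup:

DIR=/tmp/local-volume-test-a46e36d0-9b7b-4bd0-9068-ca93a1615f27
mkdir -p "$DIR"
mount -t tmpfs -o size=10m "tmpfs-$DIR" "$DIR"   # 10 MiB tmpfs; the device name string is arbitrary

# cleanup
umount "$DIR"
rm -r "$DIR"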
• [SLOW TEST:8.197 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev","total":-1,"completed":25,"skipped":1075,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:17.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:49:21.217: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-bbb08d71-d4dc-46dc-9fa4-8650b8703c10-backend && mount --bind /tmp/local-volume-test-bbb08d71-d4dc-46dc-9fa4-8650b8703c10-backend /tmp/local-volume-test-bbb08d71-d4dc-46dc-9fa4-8650b8703c10-backend && ln -s /tmp/local-volume-test-bbb08d71-d4dc-46dc-9fa4-8650b8703c10-backend /tmp/local-volume-test-bbb08d71-d4dc-46dc-9fa4-8650b8703c10] Namespace:persistent-local-volumes-test-5126 PodName:hostexec-node2-4vm7r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:21.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:49:21.361: INFO: Creating a PV followed by a PVC Nov 13 05:49:21.368: INFO: Waiting for PV local-pvw6jsl to bind to PVC pvc-5bgks Nov 13 05:49:21.368: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5bgks] to have phase Bound Nov 13 05:49:21.370: INFO: PersistentVolumeClaim pvc-5bgks found but phase is Pending instead of Bound. 
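In the character-device spec summarized above, the prep pod first creates a real device node under the test directory, and the failing case then points a hostPath volume of type CharDevice at a path that is not an existing character device, which must surface as a HostPathType error event. Sketch (the non-existent path is inferred from the spec title, not quoted from the run):

# inside the prep pod, on the node's test directory:
mknod /mnt/test/achardev c 89 1          # a real character device node (major 89, minor 1)

# the volume expected to be rejected looks roughly like:
#   hostPath:
#     path: /mnt/test/does-not-exist-char-dev
#     type: CharDevice                   # the path must already be a character device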
Nov 13 05:49:23.373: INFO: PersistentVolumeClaim pvc-5bgks found and phase=Bound (2.00534748s) Nov 13 05:49:23.373: INFO: Waiting up to 3m0s for PersistentVolume local-pvw6jsl to have phase Bound Nov 13 05:49:23.376: INFO: PersistentVolume local-pvw6jsl found and phase=Bound (2.424121ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 13 05:49:29.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5126 exec pod-f0a7baf0-2061-4ae4-bb98-733e4ab2c4c8 --namespace=persistent-local-volumes-test-5126 -- stat -c %g /mnt/volume1' Nov 13 05:49:29.658: INFO: stderr: "" Nov 13 05:49:29.658: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 13 05:49:37.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5126 exec pod-3ab04a0e-9079-4828-bb12-d5904b97e6d8 --namespace=persistent-local-volumes-test-5126 -- stat -c %g /mnt/volume1' Nov 13 05:49:37.999: INFO: stderr: "" Nov 13 05:49:37.999: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-f0a7baf0-2061-4ae4-bb98-733e4ab2c4c8 in namespace persistent-local-volumes-test-5126 STEP: Deleting second pod STEP: Deleting pod pod-3ab04a0e-9079-4828-bb12-d5904b97e6d8 in namespace persistent-local-volumes-test-5126 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:49:38.008: INFO: Deleting PersistentVolumeClaim "pvc-5bgks" Nov 13 05:49:38.012: INFO: Deleting PersistentVolume "local-pvw6jsl" STEP: Removing the test directory Nov 13 05:49:38.015: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-bbb08d71-d4dc-46dc-9fa4-8650b8703c10 && umount /tmp/local-volume-test-bbb08d71-d4dc-46dc-9fa4-8650b8703c10-backend && rm -r /tmp/local-volume-test-bbb08d71-d4dc-46dc-9fa4-8650b8703c10-backend] Namespace:persistent-local-volumes-test-5126 PodName:hostexec-node2-4vm7r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:38.015: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:38.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5126" for this suite. 
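The fsGroup assertion in these local-volume specs is a group-ownership check on the mounted path: the pods are created with an fsGroup (1234 in this run) and the test confirms the volume's GID matches, for example:

kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5126 \
  exec pod-f0a7baf0-2061-4ae4-bb98-733e4ab2c4c8 -- stat -c %g /mnt/volume1
# expected stdout: 1234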
• [SLOW TEST:21.183 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":13,"skipped":503,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:38.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:49:38.412: INFO: The status of Pod test-hostpath-type-dc2lp is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:49:40.416: INFO: The status of Pod test-hostpath-type-dc2lp is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:49:42.419: INFO: The status of Pod test-hostpath-type-dc2lp is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:48.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-2912" for this suite. 
• [SLOW TEST:10.107 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev","total":-1,"completed":14,"skipped":514,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:48.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:49:48.523: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:48.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2084" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:48.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 13 05:49:48.561: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:48.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9403" for this suite. 
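The Volume metrics specs are skipped here because the suite was started with the local provider and their BeforeEach only proceeds for gce, gke or aws. Against such a provider the same specs can be selected roughly like this (a sketch; flag values are illustrative and assume a built e2e.test binary):

./e2e.test --provider=gce \
  --kubeconfig="$HOME/.kube/config" \
  --ginkgo.focus='\[sig-storage\].*Volume metrics'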
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:147 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:48.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 STEP: Creating a pod to test emptydir volume type on node default medium Nov 13 05:49:48.612: INFO: Waiting up to 5m0s for pod "pod-495d09d1-f165-41eb-ba63-ba341ba32434" in namespace "emptydir-5289" to be "Succeeded or Failed" Nov 13 05:49:48.615: INFO: Pod "pod-495d09d1-f165-41eb-ba63-ba341ba32434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.98572ms Nov 13 05:49:50.619: INFO: Pod "pod-495d09d1-f165-41eb-ba63-ba341ba32434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007078204s Nov 13 05:49:52.623: INFO: Pod "pod-495d09d1-f165-41eb-ba63-ba341ba32434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010855088s STEP: Saw pod success Nov 13 05:49:52.623: INFO: Pod "pod-495d09d1-f165-41eb-ba63-ba341ba32434" satisfied condition "Succeeded or Failed" Nov 13 05:49:52.625: INFO: Trying to get logs from node node1 pod pod-495d09d1-f165-41eb-ba63-ba341ba32434 container test-container: STEP: delete the pod Nov 13 05:49:52.661: INFO: Waiting for pod pod-495d09d1-f165-41eb-ba63-ba341ba32434 to disappear Nov 13 05:49:52.663: INFO: Pod pod-495d09d1-f165-41eb-ba63-ba341ba32434 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:49:52.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5289" for this suite. 
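The emptyDir case above verifies that, with an fsGroup set, the volume root on the default medium ends up with the expected mode and group. A compact way to reproduce the observation by hand, with invented names (the exact mode asserted by the test is not shown in the log, so only the stat output is sketched):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo
spec:
  securityContext:
    fsGroup: 1234
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "stat -c '%a %g' /mnt/volume1 && sleep 3600"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume1
  volumes:
  - name: vol
    emptyDir: {}
EOF
kubectl logs emptydir-fsgroup-demo   # prints the mode and group of the emptyDir root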
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":15,"skipped":528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:52.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:49:56.835: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-91b36215-788f-454d-a913-485b4f87f28e-backend && ln -s /tmp/local-volume-test-91b36215-788f-454d-a913-485b4f87f28e-backend /tmp/local-volume-test-91b36215-788f-454d-a913-485b4f87f28e] Namespace:persistent-local-volumes-test-5238 PodName:hostexec-node1-mhz7w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:49:56.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:49:56.956: INFO: Creating a PV followed by a PVC Nov 13 05:49:56.962: INFO: Waiting for PV local-pvhbqxk to bind to PVC pvc-jkwfb Nov 13 05:49:56.962: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jkwfb] to have phase Bound Nov 13 05:49:56.964: INFO: PersistentVolumeClaim pvc-jkwfb found but phase is Pending instead of Bound. Nov 13 05:49:58.973: INFO: PersistentVolumeClaim pvc-jkwfb found but phase is Pending instead of Bound. Nov 13 05:50:00.976: INFO: PersistentVolumeClaim pvc-jkwfb found but phase is Pending instead of Bound. Nov 13 05:50:02.979: INFO: PersistentVolumeClaim pvc-jkwfb found but phase is Pending instead of Bound. Nov 13 05:50:04.983: INFO: PersistentVolumeClaim pvc-jkwfb found but phase is Pending instead of Bound. Nov 13 05:50:06.987: INFO: PersistentVolumeClaim pvc-jkwfb found but phase is Pending instead of Bound. Nov 13 05:50:08.992: INFO: PersistentVolumeClaim pvc-jkwfb found but phase is Pending instead of Bound. Nov 13 05:50:10.995: INFO: PersistentVolumeClaim pvc-jkwfb found but phase is Pending instead of Bound. 
Nov 13 05:50:12.998: INFO: PersistentVolumeClaim pvc-jkwfb found and phase=Bound (16.036001096s) Nov 13 05:50:12.998: INFO: Waiting up to 3m0s for PersistentVolume local-pvhbqxk to have phase Bound Nov 13 05:50:13.000: INFO: PersistentVolume local-pvhbqxk found and phase=Bound (1.821147ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:50:13.004: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:50:13.006: INFO: Deleting PersistentVolumeClaim "pvc-jkwfb" Nov 13 05:50:13.010: INFO: Deleting PersistentVolume "local-pvhbqxk" STEP: Removing the test directory Nov 13 05:50:13.014: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-91b36215-788f-454d-a913-485b4f87f28e && rm -r /tmp/local-volume-test-91b36215-788f-454d-a913-485b4f87f28e-backend] Namespace:persistent-local-volumes-test-5238 PodName:hostexec-node1-mhz7w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:13.014: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:13.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5238" for this suite. 
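The "Creating a PV followed by a PVC" step above relies on static binding between a pre-created local PersistentVolume and a claim; the framework then simply polls the claim until its phase reports Bound, which is what the repeated "found but phase is Pending" entries show. A hand-rolled version of that wait, with a placeholder claim name and timeout:

# Poll a PVC until it reports phase Bound, or give up after roughly 3 minutes.
for i in $(seq 1 90); do
  phase=$(kubectl get pvc pvc-demo -o jsonpath='{.status.phase}')
  [ "$phase" = "Bound" ] && break
  sleep 2
done
echo "pvc-demo phase: $phase"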
S [SKIPPING] [20.554 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:13.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 13 05:50:13.556: INFO: Waiting up to 5m0s for pod "pod-54377f4b-ba07-44af-9b46-b61fe7858eac" in namespace "emptydir-5743" to be "Succeeded or Failed" Nov 13 05:50:13.560: INFO: Pod "pod-54377f4b-ba07-44af-9b46-b61fe7858eac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.457702ms Nov 13 05:50:15.565: INFO: Pod "pod-54377f4b-ba07-44af-9b46-b61fe7858eac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008743562s Nov 13 05:50:17.569: INFO: Pod "pod-54377f4b-ba07-44af-9b46-b61fe7858eac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012785108s STEP: Saw pod success Nov 13 05:50:17.569: INFO: Pod "pod-54377f4b-ba07-44af-9b46-b61fe7858eac" satisfied condition "Succeeded or Failed" Nov 13 05:50:17.572: INFO: Trying to get logs from node node1 pod pod-54377f4b-ba07-44af-9b46-b61fe7858eac container test-container: STEP: delete the pod Nov 13 05:50:17.587: INFO: Waiting for pod pod-54377f4b-ba07-44af-9b46-b61fe7858eac to disappear Nov 13 05:50:17.589: INFO: Pod pod-54377f4b-ba07-44af-9b46-b61fe7858eac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:17.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5743" for this suite. 
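The second emptyDir case checks that files created by a non-root container inherit the pod-level fsGroup, which follows from the kubelet putting the group and the setgid bit on the volume root. A sketch of the same observation, all names invented for illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  securityContext:
    runAsUser: 1000     # non-root
    fsGroup: 1234
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/volume1/f && stat -c '%u %g' /mnt/volume1/f && sleep 3600"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume1
  volumes:
  - name: vol
    emptyDir:
      medium: Memory   # tmpfs, as in the "0644 on tmpfs" step above
EOF
kubectl logs emptydir-nonroot-demo   # expect "1000 1234": owned by the non-root uid, group from fsGroup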
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":16,"skipped":678,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:23.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-12 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:49:23.497: INFO: creating *v1.ServiceAccount: csi-mock-volumes-12-6103/csi-attacher Nov 13 05:49:23.500: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-12 Nov 13 05:49:23.500: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-12 Nov 13 05:49:23.503: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-12 Nov 13 05:49:23.506: INFO: creating *v1.Role: csi-mock-volumes-12-6103/external-attacher-cfg-csi-mock-volumes-12 Nov 13 05:49:23.509: INFO: creating *v1.RoleBinding: csi-mock-volumes-12-6103/csi-attacher-role-cfg Nov 13 05:49:23.511: INFO: creating *v1.ServiceAccount: csi-mock-volumes-12-6103/csi-provisioner Nov 13 05:49:23.514: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-12 Nov 13 05:49:23.514: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-12 Nov 13 05:49:23.518: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-12 Nov 13 05:49:23.521: INFO: creating *v1.Role: csi-mock-volumes-12-6103/external-provisioner-cfg-csi-mock-volumes-12 Nov 13 05:49:23.523: INFO: creating *v1.RoleBinding: csi-mock-volumes-12-6103/csi-provisioner-role-cfg Nov 13 05:49:23.526: INFO: creating *v1.ServiceAccount: csi-mock-volumes-12-6103/csi-resizer Nov 13 05:49:23.528: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-12 Nov 13 05:49:23.528: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-12 Nov 13 05:49:23.530: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-12 Nov 13 05:49:23.532: INFO: creating *v1.Role: csi-mock-volumes-12-6103/external-resizer-cfg-csi-mock-volumes-12 Nov 13 05:49:23.535: INFO: creating *v1.RoleBinding: csi-mock-volumes-12-6103/csi-resizer-role-cfg Nov 13 05:49:23.537: INFO: creating *v1.ServiceAccount: csi-mock-volumes-12-6103/csi-snapshotter Nov 13 05:49:23.540: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-12 Nov 13 05:49:23.540: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-12 Nov 13 05:49:23.542: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-12 Nov 13 05:49:23.545: INFO: creating *v1.Role: csi-mock-volumes-12-6103/external-snapshotter-leaderelection-csi-mock-volumes-12 Nov 13 05:49:23.547: INFO: creating *v1.RoleBinding: csi-mock-volumes-12-6103/external-snapshotter-leaderelection Nov 13 05:49:23.550: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-12-6103/csi-mock Nov 13 05:49:23.552: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-12 Nov 13 05:49:23.556: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-12 Nov 13 05:49:23.559: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-12 Nov 13 05:49:23.562: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-12 Nov 13 05:49:23.564: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-12 Nov 13 05:49:23.567: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-12 Nov 13 05:49:23.569: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-12 Nov 13 05:49:23.572: INFO: creating *v1.StatefulSet: csi-mock-volumes-12-6103/csi-mockplugin Nov 13 05:49:23.576: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-12 Nov 13 05:49:23.579: INFO: creating *v1.StatefulSet: csi-mock-volumes-12-6103/csi-mockplugin-attacher Nov 13 05:49:23.583: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-12" Nov 13 05:49:23.585: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-12 to register on node node1 STEP: Creating pod Nov 13 05:49:38.107: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 13 05:49:50.134: INFO: Deleting pod "pvc-volume-tester-g67d8" in namespace "csi-mock-volumes-12" Nov 13 05:49:50.139: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g67d8" to be fully deleted STEP: Deleting pod pvc-volume-tester-g67d8 Nov 13 05:50:02.144: INFO: Deleting pod "pvc-volume-tester-g67d8" in namespace "csi-mock-volumes-12" STEP: Deleting claim pvc-sqwdh Nov 13 05:50:02.158: INFO: Waiting up to 2m0s for PersistentVolume pvc-8a1f53ad-b69f-4800-8369-3265d55c284c to get deleted Nov 13 05:50:02.161: INFO: PersistentVolume pvc-8a1f53ad-b69f-4800-8369-3265d55c284c found and phase=Bound (2.711739ms) Nov 13 05:50:04.168: INFO: PersistentVolume pvc-8a1f53ad-b69f-4800-8369-3265d55c284c found and phase=Released (2.009723926s) Nov 13 05:50:06.171: INFO: PersistentVolume pvc-8a1f53ad-b69f-4800-8369-3265d55c284c was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-12 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-12 STEP: Waiting for namespaces [csi-mock-volumes-12] to vanish STEP: uninstalling csi mock driver Nov 13 05:50:12.186: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-12-6103/csi-attacher Nov 13 05:50:12.192: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-12 Nov 13 05:50:12.196: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-12 Nov 13 05:50:12.199: INFO: deleting *v1.Role: csi-mock-volumes-12-6103/external-attacher-cfg-csi-mock-volumes-12 Nov 13 05:50:12.204: INFO: deleting *v1.RoleBinding: csi-mock-volumes-12-6103/csi-attacher-role-cfg Nov 13 05:50:12.207: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-12-6103/csi-provisioner Nov 13 05:50:12.211: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-12 Nov 13 05:50:12.215: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-12 Nov 13 05:50:12.218: INFO: deleting *v1.Role: csi-mock-volumes-12-6103/external-provisioner-cfg-csi-mock-volumes-12 Nov 13 05:50:12.221: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-12-6103/csi-provisioner-role-cfg Nov 13 05:50:12.224: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-12-6103/csi-resizer Nov 13 05:50:12.229: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-12 Nov 13 05:50:12.232: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-12 Nov 13 05:50:12.235: INFO: deleting *v1.Role: csi-mock-volumes-12-6103/external-resizer-cfg-csi-mock-volumes-12 Nov 13 05:50:12.238: INFO: deleting *v1.RoleBinding: csi-mock-volumes-12-6103/csi-resizer-role-cfg Nov 13 05:50:12.241: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-12-6103/csi-snapshotter Nov 13 05:50:12.244: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-12 Nov 13 05:50:12.248: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-12 Nov 13 05:50:12.251: INFO: deleting *v1.Role: csi-mock-volumes-12-6103/external-snapshotter-leaderelection-csi-mock-volumes-12 Nov 13 05:50:12.255: INFO: deleting *v1.RoleBinding: csi-mock-volumes-12-6103/external-snapshotter-leaderelection Nov 13 05:50:12.258: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-12-6103/csi-mock Nov 13 05:50:12.261: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-12 Nov 13 05:50:12.265: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-12 Nov 13 05:50:12.268: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-12 Nov 13 05:50:12.272: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-12 Nov 13 05:50:12.275: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-12 Nov 13 05:50:12.278: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-12 Nov 13 05:50:12.282: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-12 Nov 13 05:50:12.286: INFO: deleting *v1.StatefulSet: csi-mock-volumes-12-6103/csi-mockplugin Nov 13 05:50:12.290: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-12 Nov 13 05:50:12.293: INFO: deleting *v1.StatefulSet: csi-mock-volumes-12-6103/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-12-6103 STEP: Waiting for namespaces [csi-mock-volumes-12-6103] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:18.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:54.883 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":11,"skipped":411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:49:16.203: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 STEP: Building a driver namespace object, basename csi-mock-volumes-6808 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:49:16.260: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-attacher Nov 13 05:49:16.262: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6808 Nov 13 05:49:16.262: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6808 Nov 13 05:49:16.265: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6808 Nov 13 05:49:16.268: INFO: creating *v1.Role: csi-mock-volumes-6808-9794/external-attacher-cfg-csi-mock-volumes-6808 Nov 13 05:49:16.271: INFO: creating *v1.RoleBinding: csi-mock-volumes-6808-9794/csi-attacher-role-cfg Nov 13 05:49:16.274: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-provisioner Nov 13 05:49:16.276: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6808 Nov 13 05:49:16.276: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6808 Nov 13 05:49:16.279: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6808 Nov 13 05:49:16.282: INFO: creating *v1.Role: csi-mock-volumes-6808-9794/external-provisioner-cfg-csi-mock-volumes-6808 Nov 13 05:49:16.285: INFO: creating *v1.RoleBinding: csi-mock-volumes-6808-9794/csi-provisioner-role-cfg Nov 13 05:49:16.288: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-resizer Nov 13 05:49:16.291: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6808 Nov 13 05:49:16.291: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6808 Nov 13 05:49:16.294: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6808 Nov 13 05:49:16.297: INFO: creating *v1.Role: csi-mock-volumes-6808-9794/external-resizer-cfg-csi-mock-volumes-6808 Nov 13 05:49:16.300: INFO: creating *v1.RoleBinding: csi-mock-volumes-6808-9794/csi-resizer-role-cfg Nov 13 05:49:16.304: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-snapshotter Nov 13 05:49:16.310: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6808 Nov 13 05:49:16.310: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6808 Nov 13 05:49:16.314: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6808 Nov 13 05:49:16.320: INFO: creating *v1.Role: csi-mock-volumes-6808-9794/external-snapshotter-leaderelection-csi-mock-volumes-6808 Nov 13 05:49:16.324: INFO: creating *v1.RoleBinding: csi-mock-volumes-6808-9794/external-snapshotter-leaderelection Nov 13 05:49:16.329: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-mock Nov 13 05:49:16.335: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6808 Nov 13 05:49:16.339: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6808 Nov 13 05:49:16.342: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6808 Nov 13 05:49:16.344: INFO: creating *v1.ClusterRoleBinding: 
psp-csi-controller-driver-registrar-role-csi-mock-volumes-6808 Nov 13 05:49:16.347: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6808 Nov 13 05:49:16.349: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6808 Nov 13 05:49:16.351: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6808 Nov 13 05:49:16.354: INFO: creating *v1.StatefulSet: csi-mock-volumes-6808-9794/csi-mockplugin Nov 13 05:49:16.358: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6808 Nov 13 05:49:16.362: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6808" Nov 13 05:49:16.364: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6808 to register on node node2 STEP: Creating pod with fsGroup Nov 13 05:49:26.378: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:49:26.383: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cw9sl] to have phase Bound Nov 13 05:49:26.386: INFO: PersistentVolumeClaim pvc-cw9sl found but phase is Pending instead of Bound. Nov 13 05:49:28.391: INFO: PersistentVolumeClaim pvc-cw9sl found and phase=Bound (2.007747599s) Nov 13 05:49:36.412: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-6808] Namespace:csi-mock-volumes-6808 PodName:pvc-volume-tester-lvqz2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:36.412: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:36.537: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-6808/csi-mock-volumes-6808'; sync] Namespace:csi-mock-volumes-6808 PodName:pvc-volume-tester-lvqz2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:36.537: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:38.800: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-6808/csi-mock-volumes-6808] Namespace:csi-mock-volumes-6808 PodName:pvc-volume-tester-lvqz2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:38.800: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:49:38.880: INFO: pod csi-mock-volumes-6808/pvc-volume-tester-lvqz2 exec for cmd ls -l /mnt/test/csi-mock-volumes-6808/csi-mock-volumes-6808, stdout: -rw-r--r-- 1 root 8172 13 Nov 13 05:49 /mnt/test/csi-mock-volumes-6808/csi-mock-volumes-6808, stderr: Nov 13 05:49:38.880: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-6808] Namespace:csi-mock-volumes-6808 PodName:pvc-volume-tester-lvqz2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:49:38.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-lvqz2 Nov 13 05:49:38.959: INFO: Deleting pod "pvc-volume-tester-lvqz2" in namespace "csi-mock-volumes-6808" Nov 13 05:49:38.965: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lvqz2" to be fully deleted STEP: Deleting claim pvc-cw9sl Nov 13 05:50:12.977: INFO: Waiting up to 2m0s for PersistentVolume pvc-de916a0a-7460-4e3f-ad5c-b61ff5a1d685 to get deleted Nov 13 05:50:12.979: INFO: PersistentVolume pvc-de916a0a-7460-4e3f-ad5c-b61ff5a1d685 found and phase=Bound (1.840665ms) Nov 13 05:50:14.983: INFO: PersistentVolume pvc-de916a0a-7460-4e3f-ad5c-b61ff5a1d685 was removed STEP: Deleting storageclass 
csi-mock-volumes-6808-scmcnx6 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6808 STEP: Waiting for namespaces [csi-mock-volumes-6808] to vanish STEP: uninstalling csi mock driver Nov 13 05:50:20.997: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-attacher Nov 13 05:50:21.001: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6808 Nov 13 05:50:21.004: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6808 Nov 13 05:50:21.008: INFO: deleting *v1.Role: csi-mock-volumes-6808-9794/external-attacher-cfg-csi-mock-volumes-6808 Nov 13 05:50:21.011: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6808-9794/csi-attacher-role-cfg Nov 13 05:50:21.016: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-provisioner Nov 13 05:50:21.020: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6808 Nov 13 05:50:21.030: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6808 Nov 13 05:50:21.033: INFO: deleting *v1.Role: csi-mock-volumes-6808-9794/external-provisioner-cfg-csi-mock-volumes-6808 Nov 13 05:50:21.037: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6808-9794/csi-provisioner-role-cfg Nov 13 05:50:21.040: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-resizer Nov 13 05:50:21.043: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6808 Nov 13 05:50:21.046: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6808 Nov 13 05:50:21.050: INFO: deleting *v1.Role: csi-mock-volumes-6808-9794/external-resizer-cfg-csi-mock-volumes-6808 Nov 13 05:50:21.053: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6808-9794/csi-resizer-role-cfg Nov 13 05:50:21.057: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-snapshotter Nov 13 05:50:21.062: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6808 Nov 13 05:50:21.065: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6808 Nov 13 05:50:21.068: INFO: deleting *v1.Role: csi-mock-volumes-6808-9794/external-snapshotter-leaderelection-csi-mock-volumes-6808 Nov 13 05:50:21.072: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6808-9794/external-snapshotter-leaderelection Nov 13 05:50:21.075: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6808-9794/csi-mock Nov 13 05:50:21.079: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6808 Nov 13 05:50:21.082: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6808 Nov 13 05:50:21.085: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6808 Nov 13 05:50:21.088: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6808 Nov 13 05:50:21.092: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6808 Nov 13 05:50:21.095: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6808 Nov 13 05:50:21.098: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6808 Nov 13 05:50:21.102: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6808-9794/csi-mockplugin Nov 13 05:50:21.105: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6808 STEP: deleting the driver namespace: csi-mock-volumes-6808-9794 STEP: Waiting for namespaces [csi-mock-volumes-6808-9794] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:33.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:76.919 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436 should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":13,"skipped":451,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:17.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:50:21.662: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-65a8fe5c-66de-43c1-92a4-50a9ece4ff40] Namespace:persistent-local-volumes-test-9999 PodName:hostexec-node1-c7scw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:21.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:50:23.426: INFO: Creating a PV followed by a PVC Nov 13 05:50:23.433: INFO: Waiting for PV local-pv9k5gt to bind to PVC pvc-x74b5 Nov 13 05:50:23.433: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-x74b5] to have phase Bound Nov 13 05:50:23.435: INFO: PersistentVolumeClaim pvc-x74b5 found but phase is Pending instead of Bound. Nov 13 05:50:25.439: INFO: PersistentVolumeClaim pvc-x74b5 found but phase is Pending instead of Bound. 
Nov 13 05:50:27.442: INFO: PersistentVolumeClaim pvc-x74b5 found and phase=Bound (4.008398884s) Nov 13 05:50:27.442: INFO: Waiting up to 3m0s for PersistentVolume local-pv9k5gt to have phase Bound Nov 13 05:50:27.445: INFO: PersistentVolume local-pv9k5gt found and phase=Bound (2.663923ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:50:35.472: INFO: pod "pod-b467c8be-35dd-46e1-b464-97006c52356a" created on Node "node1" STEP: Writing in pod1 Nov 13 05:50:35.472: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9999 PodName:pod-b467c8be-35dd-46e1-b464-97006c52356a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:35.472: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:35.557: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:50:35.557: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9999 PodName:pod-b467c8be-35dd-46e1-b464-97006c52356a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:35.557: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:35.650: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:50:35.650: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-65a8fe5c-66de-43c1-92a4-50a9ece4ff40 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9999 PodName:pod-b467c8be-35dd-46e1-b464-97006c52356a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:35.650: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:35.733: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-65a8fe5c-66de-43c1-92a4-50a9ece4ff40 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-b467c8be-35dd-46e1-b464-97006c52356a in namespace persistent-local-volumes-test-9999 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:50:35.738: INFO: Deleting PersistentVolumeClaim "pvc-x74b5" Nov 13 05:50:35.742: INFO: Deleting PersistentVolume "local-pv9k5gt" STEP: Removing the test directory Nov 13 05:50:35.746: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-65a8fe5c-66de-43c1-92a4-50a9ece4ff40] Namespace:persistent-local-volumes-test-9999 PodName:hostexec-node1-c7scw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:35.746: INFO: >>> kubeConfig: /root/.kube/config 
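The write and read steps above are plain shell executed inside the pod that has the claim mounted at /mnt/volume1; reproduced outside the framework they amount to the following (pod and namespace names are hypothetical):

kubectl -n local-pv-demo exec write-pod -- sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
kubectl -n local-pv-demo exec write-pod -- cat /mnt/volume1/test-file   # -> test-file-content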
[AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:35.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9999" for this suite. • [SLOW TEST:18.233 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":17,"skipped":686,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:35.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Nov 13 05:50:35.919: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Nov 13 05:50:35.924: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:35.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-4815" for this suite. 
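The PVC Protection spec is skipped because the framework asks for the cluster's default StorageClass and none is marked as such. On a cluster that does have a suitable provisioner, one class can be made the default roughly like this (the class name is a placeholder):

kubectl patch storageclass my-class \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get storageclass   # the default class is listed with "(default)" next to its name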
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:35.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should test that deleting a claim before the volume is provisioned deletes the volume. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511 Nov 13 05:50:36.016: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:36.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-7374" for this suite. S [SKIPPING] [0.031 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should test that deleting a claim before the volume is provisioned deletes the volume. 
[It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:517 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:37.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430 STEP: Creating the pod [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:37.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1792" for this suite. • [SLOW TEST:300.056 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430 ------------------------------ {"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":9,"skipped":359,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:45:41.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 STEP: Creating secret with name s-test-opt-create-4c2e9cd6-3700-4192-b72d-9fb1d76f4e78 STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:41.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4358" for this suite. 
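Both five-minute specs above create a pod whose secret volume is non-optional and point it at something that is missing (the whole secret in one case, a key in the other); the pod then stays stuck with mount failures, which is the expected outcome. A minimal non-optional secret volume looks roughly like this, with invented names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-nonoptional-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    secret:
      secretName: does-not-exist
      optional: false    # the default, shown for emphasis; the kubelet keeps retrying the mount
EOF
kubectl get events --field-selector involvedObject.name=secret-nonoptional-demo   # shows the FailedMount events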
• [SLOW TEST:300.054 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":10,"skipped":396,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:41.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 13 05:50:41.355: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:41.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-836" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 4 containers and 1 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:41.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 STEP: Creating configMap with name projected-configmap-test-volume-map-9a243e58-ec19-4518-8ccf-d3845bcc88e9 STEP: Creating a pod to test consume configMaps Nov 13 05:50:41.459: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-26b00f56-a8cc-42bc-9a6f-b787021e9bb1" in namespace "projected-8918" to be "Succeeded or Failed" Nov 13 05:50:41.461: INFO: Pod 
"pod-projected-configmaps-26b00f56-a8cc-42bc-9a6f-b787021e9bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120079ms Nov 13 05:50:43.467: INFO: Pod "pod-projected-configmaps-26b00f56-a8cc-42bc-9a6f-b787021e9bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00758164s Nov 13 05:50:45.472: INFO: Pod "pod-projected-configmaps-26b00f56-a8cc-42bc-9a6f-b787021e9bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012563596s Nov 13 05:50:47.474: INFO: Pod "pod-projected-configmaps-26b00f56-a8cc-42bc-9a6f-b787021e9bb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015304835s STEP: Saw pod success Nov 13 05:50:47.474: INFO: Pod "pod-projected-configmaps-26b00f56-a8cc-42bc-9a6f-b787021e9bb1" satisfied condition "Succeeded or Failed" Nov 13 05:50:47.477: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-26b00f56-a8cc-42bc-9a6f-b787021e9bb1 container agnhost-container: STEP: delete the pod Nov 13 05:50:47.493: INFO: Waiting for pod pod-projected-configmaps-26b00f56-a8cc-42bc-9a6f-b787021e9bb1 to disappear Nov 13 05:50:47.495: INFO: Pod pod-projected-configmaps-26b00f56-a8cc-42bc-9a6f-b787021e9bb1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:47.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8918" for this suite. • [SLOW TEST:6.083 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":496,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:47.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Nov 13 05:50:47.540: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:47.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-2868" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv3 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:102 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:33.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:50:35.227: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8c7b3be9-bf21-4c12-8869-84bc2da18663] Namespace:persistent-local-volumes-test-2386 PodName:hostexec-node2-5x8rd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:35.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:50:35.736: INFO: Creating a PV followed by a PVC Nov 13 05:50:35.742: INFO: Waiting for PV local-pvf6rg6 to bind to PVC pvc-mk2jr Nov 13 05:50:35.742: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-mk2jr] to have phase Bound Nov 13 05:50:35.744: INFO: PersistentVolumeClaim pvc-mk2jr found but phase is Pending instead of Bound. Nov 13 05:50:37.747: INFO: PersistentVolumeClaim pvc-mk2jr found but phase is Pending instead of Bound. Nov 13 05:50:39.752: INFO: PersistentVolumeClaim pvc-mk2jr found but phase is Pending instead of Bound. 
Nov 13 05:50:41.756: INFO: PersistentVolumeClaim pvc-mk2jr found and phase=Bound (6.013960483s) Nov 13 05:50:41.756: INFO: Waiting up to 3m0s for PersistentVolume local-pvf6rg6 to have phase Bound Nov 13 05:50:41.758: INFO: PersistentVolume local-pvf6rg6 found and phase=Bound (2.275899ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:50:45.785: INFO: pod "pod-ef5098ff-33e0-4368-b9a7-108d53ede30a" created on Node "node2" STEP: Writing in pod1 Nov 13 05:50:45.785: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2386 PodName:pod-ef5098ff-33e0-4368-b9a7-108d53ede30a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:45.785: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:45.862: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:50:45.862: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2386 PodName:pod-ef5098ff-33e0-4368-b9a7-108d53ede30a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:45.862: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:45.939: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:50:49.962: INFO: pod "pod-f1309e04-874a-4950-9c66-64681032ce6a" created on Node "node2" Nov 13 05:50:49.962: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2386 PodName:pod-f1309e04-874a-4950-9c66-64681032ce6a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:49.962: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:50.041: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 13 05:50:50.042: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-8c7b3be9-bf21-4c12-8869-84bc2da18663 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2386 PodName:pod-f1309e04-874a-4950-9c66-64681032ce6a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:50.042: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:50.164: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-8c7b3be9-bf21-4c12-8869-84bc2da18663 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 13 05:50:50.164: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2386 PodName:pod-ef5098ff-33e0-4368-b9a7-108d53ede30a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:50.164: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:50.237: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-8c7b3be9-bf21-4c12-8869-84bc2da18663", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-ef5098ff-33e0-4368-b9a7-108d53ede30a in namespace persistent-local-volumes-test-2386 STEP: Deleting pod2 STEP: Deleting pod pod-f1309e04-874a-4950-9c66-64681032ce6a in namespace persistent-local-volumes-test-2386 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:50:50.249: INFO: Deleting PersistentVolumeClaim "pvc-mk2jr" Nov 13 05:50:50.253: INFO: Deleting PersistentVolume "local-pvf6rg6" STEP: Removing the test directory Nov 13 05:50:50.257: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8c7b3be9-bf21-4c12-8869-84bc2da18663] Namespace:persistent-local-volumes-test-2386 PodName:hostexec-node2-5x8rd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:50.257: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:50.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2386" for this suite. • [SLOW TEST:17.188 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":474,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:37.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-6d1ab755-9312-4295-ac8a-a714b6a89e6f" Nov 13 05:50:39.476: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6d1ab755-9312-4295-ac8a-a714b6a89e6f && dd if=/dev/zero of=/tmp/local-volume-test-6d1ab755-9312-4295-ac8a-a714b6a89e6f/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6d1ab755-9312-4295-ac8a-a714b6a89e6f/file] 
Namespace:persistent-local-volumes-test-8095 PodName:hostexec-node1-8c89f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:39.476: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:39.599: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6d1ab755-9312-4295-ac8a-a714b6a89e6f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8095 PodName:hostexec-node1-8c89f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:39.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:50:39.687: INFO: Creating a PV followed by a PVC Nov 13 05:50:39.695: INFO: Waiting for PV local-pv9ng9j to bind to PVC pvc-tqkrz Nov 13 05:50:39.695: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-tqkrz] to have phase Bound Nov 13 05:50:39.697: INFO: PersistentVolumeClaim pvc-tqkrz found but phase is Pending instead of Bound. Nov 13 05:50:41.700: INFO: PersistentVolumeClaim pvc-tqkrz found and phase=Bound (2.005833468s) Nov 13 05:50:41.700: INFO: Waiting up to 3m0s for PersistentVolume local-pv9ng9j to have phase Bound Nov 13 05:50:41.703: INFO: PersistentVolume local-pv9ng9j found and phase=Bound (2.936122ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:50:47.729: INFO: pod "pod-5fbfd77b-26e3-48e9-97c6-f8869ed0044a" created on Node "node1" STEP: Writing in pod1 Nov 13 05:50:47.729: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8095 PodName:pod-5fbfd77b-26e3-48e9-97c6-f8869ed0044a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:47.729: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:47.808: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000181 seconds, 97.1KB/s", err: Nov 13 05:50:47.808: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-8095 PodName:pod-5fbfd77b-26e3-48e9-97c6-f8869ed0044a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:47.808: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:47.911: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:50:51.932: INFO: pod "pod-9f31211f-3481-4589-afe4-e925f637a290" created on Node "node1" Nov 13 05:50:51.932: 
INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-8095 PodName:pod-9f31211f-3481-4589-afe4-e925f637a290 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:51.932: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:52.018: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod2 Nov 13 05:50:52.018: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8095 PodName:pod-9f31211f-3481-4589-afe4-e925f637a290 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:52.018: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:52.105: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000034 seconds, 315.9KB/s", err: STEP: Reading in pod1 Nov 13 05:50:52.105: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-8095 PodName:pod-5fbfd77b-26e3-48e9-97c6-f8869ed0044a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:50:52.105: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:52.180: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "/dev/loop0.ontent...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-5fbfd77b-26e3-48e9-97c6-f8869ed0044a in namespace persistent-local-volumes-test-8095 STEP: Deleting pod2 STEP: Deleting pod pod-9f31211f-3481-4589-afe4-e925f637a290 in namespace persistent-local-volumes-test-8095 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:50:52.191: INFO: Deleting PersistentVolumeClaim "pvc-tqkrz" Nov 13 05:50:52.195: INFO: Deleting PersistentVolume "local-pv9ng9j" Nov 13 05:50:52.198: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6d1ab755-9312-4295-ac8a-a714b6a89e6f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8095 PodName:hostexec-node1-8c89f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:52.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-6d1ab755-9312-4295-ac8a-a714b6a89e6f/file Nov 13 05:50:52.289: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] 
Namespace:persistent-local-volumes-test-8095 PodName:hostexec-node1-8c89f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:52.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-6d1ab755-9312-4295-ac8a-a714b6a89e6f Nov 13 05:50:52.429: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6d1ab755-9312-4295-ac8a-a714b6a89e6f] Namespace:persistent-local-volumes-test-8095 PodName:hostexec-node1-8c89f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:52.429: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:50:52.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8095" for this suite. • [SLOW TEST:15.117 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":366,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:47.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:50:51.668: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b-backend && mount --bind /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b-backend /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b-backend && ln -s /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b-backend /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b] Namespace:persistent-local-volumes-test-5756 PodName:hostexec-node1-t2m6v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:50:51.668: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:50:51.960: INFO: Creating a PV followed by a PVC Nov 13 05:50:51.967: INFO: Waiting for PV local-pv7sngb to bind to PVC pvc-ht7jd Nov 13 05:50:51.967: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ht7jd] to have phase Bound Nov 13 05:50:51.969: INFO: PersistentVolumeClaim pvc-ht7jd found but phase is Pending instead of Bound. Nov 13 05:50:53.975: INFO: PersistentVolumeClaim pvc-ht7jd found but phase is Pending instead of Bound. Nov 13 05:50:55.978: INFO: PersistentVolumeClaim pvc-ht7jd found but phase is Pending instead of Bound. Nov 13 05:50:57.981: INFO: PersistentVolumeClaim pvc-ht7jd found and phase=Bound (6.014441399s) Nov 13 05:50:57.981: INFO: Waiting up to 3m0s for PersistentVolume local-pv7sngb to have phase Bound Nov 13 05:50:57.983: INFO: PersistentVolume local-pv7sngb found and phase=Bound (1.971879ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 13 05:51:02.010: INFO: pod "pod-fd1675f8-4ff5-414b-aef8-06824287cdc1" created on Node "node1" STEP: Writing in pod1 Nov 13 05:51:02.010: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5756 PodName:pod-fd1675f8-4ff5-414b-aef8-06824287cdc1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:51:02.010: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:02.130: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:51:02.130: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5756 PodName:pod-fd1675f8-4ff5-414b-aef8-06824287cdc1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:51:02.130: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:02.213: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 13 05:51:08.238: INFO: pod "pod-970c42ca-7e33-4b47-aa06-97c3810e251c" created on Node "node1" Nov 13 05:51:08.238: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5756 PodName:pod-970c42ca-7e33-4b47-aa06-97c3810e251c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:51:08.238: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:08.327: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 13 05:51:08.327: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5756 PodName:pod-970c42ca-7e33-4b47-aa06-97c3810e251c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:51:08.327: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:08.432: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b > 
/mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 13 05:51:08.432: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5756 PodName:pod-fd1675f8-4ff5-414b-aef8-06824287cdc1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:51:08.432: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:08.508: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-fd1675f8-4ff5-414b-aef8-06824287cdc1 in namespace persistent-local-volumes-test-5756 STEP: Deleting pod2 STEP: Deleting pod pod-970c42ca-7e33-4b47-aa06-97c3810e251c in namespace persistent-local-volumes-test-5756 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:51:08.519: INFO: Deleting PersistentVolumeClaim "pvc-ht7jd" Nov 13 05:51:08.523: INFO: Deleting PersistentVolume "local-pv7sngb" STEP: Removing the test directory Nov 13 05:51:08.527: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b && umount /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b-backend && rm -r /tmp/local-volume-test-99dfbb63-0af1-464d-b545-26b45f24664b-backend] Namespace:persistent-local-volumes-test-5756 PodName:hostexec-node1-t2m6v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:08.527: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:51:08.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5756" for this suite. 
• [SLOW TEST:21.033 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":527,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:51:08.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Nov 13 05:51:12.715: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4ba3e4df-969c-4062-8cf8-cdabce9e24e4] Namespace:persistent-local-volumes-test-2599 PodName:hostexec-node1-km252 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:12.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:51:12.803: INFO: Creating a PV followed by a PVC Nov 13 05:51:12.811: INFO: Waiting for PV local-pvnd7v8 to bind to PVC pvc-fjrxq Nov 13 05:51:12.811: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fjrxq] to have phase Bound Nov 13 05:51:12.816: INFO: PersistentVolumeClaim pvc-fjrxq found but phase is Pending instead of Bound. Nov 13 05:51:14.818: INFO: PersistentVolumeClaim pvc-fjrxq found but phase is Pending instead of Bound. Nov 13 05:51:16.824: INFO: PersistentVolumeClaim pvc-fjrxq found but phase is Pending instead of Bound. Nov 13 05:51:18.829: INFO: PersistentVolumeClaim pvc-fjrxq found but phase is Pending instead of Bound. Nov 13 05:51:20.834: INFO: PersistentVolumeClaim pvc-fjrxq found but phase is Pending instead of Bound. Nov 13 05:51:22.839: INFO: PersistentVolumeClaim pvc-fjrxq found but phase is Pending instead of Bound. Nov 13 05:51:24.846: INFO: PersistentVolumeClaim pvc-fjrxq found but phase is Pending instead of Bound. 
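Likewise, the [Volume type: block] case further up is backed by a loop device rather than a directory; its ExecWithOptions commands reduce to roughly this sketch (again run as root on the node, with an illustrative base path):

# Sketch of the loop-device backing used by the [Volume type: block] test above.
BASE=/tmp/local-volume-block-demo      # illustrative; the test generates /tmp/local-volume-test-<uuid>

# Setup: a 20 MiB backing file attached to the first free loop device.
mkdir -p "${BASE}"
dd if=/dev/zero of="${BASE}/file" bs=4096 count=5120
losetup -f "${BASE}/file"

# Recover the assigned device with the same losetup/grep/awk pipeline the test uses.
LOOP_DEV=$(losetup | grep "${BASE}/file" | awk '{ print $1 }')
echo "backing device: ${LOOP_DEV}"

# The PV for it sets volumeMode: Block and path: ${LOOP_DEV}; the pods attach it
# under volumeDevices and read/write it with dd and hexdump instead of a filesystem.

# Teardown, mirroring the cleanup entries in the log.
losetup -d "${LOOP_DEV}"
rm -r "${BASE}"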
Nov 13 05:51:26.852: INFO: PersistentVolumeClaim pvc-fjrxq found and phase=Bound (14.041145467s) Nov 13 05:51:26.852: INFO: Waiting up to 3m0s for PersistentVolume local-pvnd7v8 to have phase Bound Nov 13 05:51:26.854: INFO: PersistentVolume local-pvnd7v8 found and phase=Bound (2.250953ms) [It] should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 STEP: local-volume-type: dir STEP: Initializing test volumes Nov 13 05:51:26.858: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4cc82bf6-33b8-4af9-979e-1546034c4ec5] Namespace:persistent-local-volumes-test-2599 PodName:hostexec-node1-km252 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:26.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:51:27.049: INFO: Creating a PV followed by a PVC Nov 13 05:51:27.055: INFO: Waiting for PV local-pvnph28 to bind to PVC pvc-4sssx Nov 13 05:51:27.055: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4sssx] to have phase Bound Nov 13 05:51:27.058: INFO: PersistentVolumeClaim pvc-4sssx found but phase is Pending instead of Bound. Nov 13 05:51:29.061: INFO: PersistentVolumeClaim pvc-4sssx found but phase is Pending instead of Bound. Nov 13 05:51:31.064: INFO: PersistentVolumeClaim pvc-4sssx found but phase is Pending instead of Bound. Nov 13 05:51:33.069: INFO: PersistentVolumeClaim pvc-4sssx found but phase is Pending instead of Bound. Nov 13 05:51:35.077: INFO: PersistentVolumeClaim pvc-4sssx found but phase is Pending instead of Bound. Nov 13 05:51:37.083: INFO: PersistentVolumeClaim pvc-4sssx found but phase is Pending instead of Bound. Nov 13 05:51:39.088: INFO: PersistentVolumeClaim pvc-4sssx found but phase is Pending instead of Bound. Nov 13 05:51:41.091: INFO: PersistentVolumeClaim pvc-4sssx found but phase is Pending instead of Bound. Nov 13 05:51:43.094: INFO: PersistentVolumeClaim pvc-4sssx found and phase=Bound (16.039204937s) Nov 13 05:51:43.094: INFO: Waiting up to 3m0s for PersistentVolume local-pvnph28 to have phase Bound Nov 13 05:51:43.098: INFO: PersistentVolume local-pvnph28 found and phase=Bound (3.71279ms) Nov 13 05:51:43.122: INFO: Waiting up to 5m0s for pod "pod-1a0fd78f-898f-4a35-acc1-c043e47b247c" in namespace "persistent-local-volumes-test-2599" to be "Unschedulable" Nov 13 05:51:43.124: INFO: Pod "pod-1a0fd78f-898f-4a35-acc1-c043e47b247c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184423ms Nov 13 05:51:45.129: INFO: Pod "pod-1a0fd78f-898f-4a35-acc1-c043e47b247c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007217595s Nov 13 05:51:45.129: INFO: Pod "pod-1a0fd78f-898f-4a35-acc1-c043e47b247c" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Nov 13 05:51:45.129: INFO: Deleting PersistentVolumeClaim "pvc-fjrxq" Nov 13 05:51:45.135: INFO: Deleting PersistentVolume "local-pvnd7v8" STEP: Removing the test directory Nov 13 05:51:45.139: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4ba3e4df-969c-4062-8cf8-cdabce9e24e4] Namespace:persistent-local-volumes-test-2599 PodName:hostexec-node1-km252 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:45.139: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:51:45.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2599" for this suite. • [SLOW TEST:36.575 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":13,"skipped":530,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:18.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-398 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:50:18.452: INFO: creating *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-attacher Nov 13 05:50:18.455: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-398 Nov 13 05:50:18.455: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-398 Nov 13 05:50:18.458: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-398 Nov 13 05:50:18.461: INFO: creating *v1.Role: csi-mock-volumes-398-1101/external-attacher-cfg-csi-mock-volumes-398 Nov 13 05:50:18.463: INFO: creating *v1.RoleBinding: csi-mock-volumes-398-1101/csi-attacher-role-cfg Nov 13 05:50:18.466: 
INFO: creating *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-provisioner Nov 13 05:50:18.468: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-398 Nov 13 05:50:18.468: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-398 Nov 13 05:50:18.471: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-398 Nov 13 05:50:18.474: INFO: creating *v1.Role: csi-mock-volumes-398-1101/external-provisioner-cfg-csi-mock-volumes-398 Nov 13 05:50:18.477: INFO: creating *v1.RoleBinding: csi-mock-volumes-398-1101/csi-provisioner-role-cfg Nov 13 05:50:18.479: INFO: creating *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-resizer Nov 13 05:50:18.481: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-398 Nov 13 05:50:18.481: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-398 Nov 13 05:50:18.484: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-398 Nov 13 05:50:18.486: INFO: creating *v1.Role: csi-mock-volumes-398-1101/external-resizer-cfg-csi-mock-volumes-398 Nov 13 05:50:18.489: INFO: creating *v1.RoleBinding: csi-mock-volumes-398-1101/csi-resizer-role-cfg Nov 13 05:50:18.492: INFO: creating *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-snapshotter Nov 13 05:50:18.494: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-398 Nov 13 05:50:18.494: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-398 Nov 13 05:50:18.497: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-398 Nov 13 05:50:18.501: INFO: creating *v1.Role: csi-mock-volumes-398-1101/external-snapshotter-leaderelection-csi-mock-volumes-398 Nov 13 05:50:18.504: INFO: creating *v1.RoleBinding: csi-mock-volumes-398-1101/external-snapshotter-leaderelection Nov 13 05:50:18.507: INFO: creating *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-mock Nov 13 05:50:18.509: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-398 Nov 13 05:50:18.511: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-398 Nov 13 05:50:18.514: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-398 Nov 13 05:50:18.517: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-398 Nov 13 05:50:18.519: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-398 Nov 13 05:50:18.522: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-398 Nov 13 05:50:18.525: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-398 Nov 13 05:50:18.527: INFO: creating *v1.StatefulSet: csi-mock-volumes-398-1101/csi-mockplugin Nov 13 05:50:18.532: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-398 Nov 13 05:50:18.534: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-398" Nov 13 05:50:18.536: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-398 to register on node node2 I1113 05:50:23.603484 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-398","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:50:23.697812 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 
05:50:23.699643 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-398","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:50:23.701007 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:50:23.703155 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:50:23.896813 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-398"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:50:28.052: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:50:28.057: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-x9gn2] to have phase Bound Nov 13 05:50:28.060: INFO: PersistentVolumeClaim pvc-x9gn2 found but phase is Pending instead of Bound. I1113 05:50:28.066227 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-834486b0-bf93-426b-8d39-700203ef7d64","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-834486b0-bf93-426b-8d39-700203ef7d64"}}},"Error":"","FullError":null} Nov 13 05:50:30.063: INFO: PersistentVolumeClaim pvc-x9gn2 found and phase=Bound (2.005959525s) Nov 13 05:50:30.078: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-x9gn2] to have phase Bound Nov 13 05:50:30.080: INFO: PersistentVolumeClaim pvc-x9gn2 found and phase=Bound (1.991798ms) STEP: Waiting for expected CSI calls I1113 05:50:30.304062 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:50:30.306400 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-834486b0-bf93-426b-8d39-700203ef7d64/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-834486b0-bf93-426b-8d39-700203ef7d64","storage.kubernetes.io/csiProvisionerIdentity":"1636782623698-8081-csi-mock-csi-mock-volumes-398"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1113 05:50:30.817237 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:50:30.819426 26 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-834486b0-bf93-426b-8d39-700203ef7d64/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-834486b0-bf93-426b-8d39-700203ef7d64","storage.kubernetes.io/csiProvisionerIdentity":"1636782623698-8081-csi-mock-csi-mock-volumes-398"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1113 05:50:31.824392 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:50:31.826734 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-834486b0-bf93-426b-8d39-700203ef7d64/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-834486b0-bf93-426b-8d39-700203ef7d64","storage.kubernetes.io/csiProvisionerIdentity":"1636782623698-8081-csi-mock-csi-mock-volumes-398"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1113 05:50:33.858154 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:50:33.859: INFO: >>> kubeConfig: /root/.kube/config I1113 05:50:33.982099 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-834486b0-bf93-426b-8d39-700203ef7d64/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-834486b0-bf93-426b-8d39-700203ef7d64","storage.kubernetes.io/csiProvisionerIdentity":"1636782623698-8081-csi-mock-csi-mock-volumes-398"}},"Response":{},"Error":"","FullError":null} I1113 05:50:33.985759 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:50:33.990: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:50:34.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Waiting for pod to be running Nov 13 05:50:34.187: INFO: >>> kubeConfig: /root/.kube/config I1113 05:50:34.337156 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-834486b0-bf93-426b-8d39-700203ef7d64/globalmount","target_path":"/var/lib/kubelet/pods/f67bee6c-310a-4f51-af06-a7904d646bc7/volumes/kubernetes.io~csi/pvc-834486b0-bf93-426b-8d39-700203ef7d64/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-834486b0-bf93-426b-8d39-700203ef7d64","storage.kubernetes.io/csiProvisionerIdentity":"1636782623698-8081-csi-mock-csi-mock-volumes-398"}},"Response":{},"Error":"","FullError":null} STEP: Deleting the previously created pod Nov 
13 05:50:38.093: INFO: Deleting pod "pvc-volume-tester-8s5pw" in namespace "csi-mock-volumes-398" Nov 13 05:50:38.098: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8s5pw" to be fully deleted Nov 13 05:50:39.696: INFO: >>> kubeConfig: /root/.kube/config I1113 05:50:39.784909 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/f67bee6c-310a-4f51-af06-a7904d646bc7/volumes/kubernetes.io~csi/pvc-834486b0-bf93-426b-8d39-700203ef7d64/mount"},"Response":{},"Error":"","FullError":null} I1113 05:50:39.798362 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:50:39.799911 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-834486b0-bf93-426b-8d39-700203ef7d64/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-8s5pw Nov 13 05:50:53.106: INFO: Deleting pod "pvc-volume-tester-8s5pw" in namespace "csi-mock-volumes-398" STEP: Deleting claim pvc-x9gn2 Nov 13 05:50:53.117: INFO: Waiting up to 2m0s for PersistentVolume pvc-834486b0-bf93-426b-8d39-700203ef7d64 to get deleted Nov 13 05:50:53.119: INFO: PersistentVolume pvc-834486b0-bf93-426b-8d39-700203ef7d64 found and phase=Bound (2.623324ms) I1113 05:50:53.133640 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 13 05:50:55.122: INFO: PersistentVolume pvc-834486b0-bf93-426b-8d39-700203ef7d64 was removed STEP: Deleting storageclass csi-mock-volumes-398-scv7crd STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-398 STEP: Waiting for namespaces [csi-mock-volumes-398] to vanish STEP: uninstalling csi mock driver Nov 13 05:51:01.155: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-attacher Nov 13 05:51:01.160: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-398 Nov 13 05:51:01.164: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-398 Nov 13 05:51:01.168: INFO: deleting *v1.Role: csi-mock-volumes-398-1101/external-attacher-cfg-csi-mock-volumes-398 Nov 13 05:51:01.171: INFO: deleting *v1.RoleBinding: csi-mock-volumes-398-1101/csi-attacher-role-cfg Nov 13 05:51:01.174: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-provisioner Nov 13 05:51:01.178: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-398 Nov 13 05:51:01.182: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-398 Nov 13 05:51:01.185: INFO: deleting *v1.Role: csi-mock-volumes-398-1101/external-provisioner-cfg-csi-mock-volumes-398 Nov 13 05:51:01.189: INFO: deleting *v1.RoleBinding: csi-mock-volumes-398-1101/csi-provisioner-role-cfg Nov 13 05:51:01.192: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-resizer Nov 13 05:51:01.197: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-398 Nov 13 05:51:01.200: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-398 Nov 13 05:51:01.204: INFO: deleting *v1.Role: csi-mock-volumes-398-1101/external-resizer-cfg-csi-mock-volumes-398 Nov 13 05:51:01.207: INFO: 
deleting *v1.RoleBinding: csi-mock-volumes-398-1101/csi-resizer-role-cfg Nov 13 05:51:01.211: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-snapshotter Nov 13 05:51:01.214: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-398 Nov 13 05:51:01.218: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-398 Nov 13 05:51:01.222: INFO: deleting *v1.Role: csi-mock-volumes-398-1101/external-snapshotter-leaderelection-csi-mock-volumes-398 Nov 13 05:51:01.225: INFO: deleting *v1.RoleBinding: csi-mock-volumes-398-1101/external-snapshotter-leaderelection Nov 13 05:51:01.229: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-398-1101/csi-mock Nov 13 05:51:01.232: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-398 Nov 13 05:51:01.236: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-398 Nov 13 05:51:01.239: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-398 Nov 13 05:51:01.243: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-398 Nov 13 05:51:01.247: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-398 Nov 13 05:51:01.250: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-398 Nov 13 05:51:01.254: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-398 Nov 13 05:51:01.257: INFO: deleting *v1.StatefulSet: csi-mock-volumes-398-1101/csi-mockplugin Nov 13 05:51:01.262: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-398 STEP: deleting the driver namespace: csi-mock-volumes-398-1101 STEP: Waiting for namespaces [csi-mock-volumes-398-1101] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:51:45.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:86.907 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error","total":-1,"completed":12,"skipped":439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:50.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-6413 STEP: Waiting for a default service account to be provisioned in namespace STEP: 
deploying csi mock driver Nov 13 05:50:50.436: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-attacher Nov 13 05:50:50.441: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6413 Nov 13 05:50:50.441: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6413 Nov 13 05:50:50.444: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6413 Nov 13 05:50:50.448: INFO: creating *v1.Role: csi-mock-volumes-6413-9352/external-attacher-cfg-csi-mock-volumes-6413 Nov 13 05:50:50.451: INFO: creating *v1.RoleBinding: csi-mock-volumes-6413-9352/csi-attacher-role-cfg Nov 13 05:50:50.453: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-provisioner Nov 13 05:50:50.456: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6413 Nov 13 05:50:50.456: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6413 Nov 13 05:50:50.459: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6413 Nov 13 05:50:50.461: INFO: creating *v1.Role: csi-mock-volumes-6413-9352/external-provisioner-cfg-csi-mock-volumes-6413 Nov 13 05:50:50.464: INFO: creating *v1.RoleBinding: csi-mock-volumes-6413-9352/csi-provisioner-role-cfg Nov 13 05:50:50.466: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-resizer Nov 13 05:50:50.469: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6413 Nov 13 05:50:50.469: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6413 Nov 13 05:50:50.471: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6413 Nov 13 05:50:50.474: INFO: creating *v1.Role: csi-mock-volumes-6413-9352/external-resizer-cfg-csi-mock-volumes-6413 Nov 13 05:50:50.477: INFO: creating *v1.RoleBinding: csi-mock-volumes-6413-9352/csi-resizer-role-cfg Nov 13 05:50:50.480: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-snapshotter Nov 13 05:50:50.482: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6413 Nov 13 05:50:50.482: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6413 Nov 13 05:50:50.485: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6413 Nov 13 05:50:50.488: INFO: creating *v1.Role: csi-mock-volumes-6413-9352/external-snapshotter-leaderelection-csi-mock-volumes-6413 Nov 13 05:50:50.490: INFO: creating *v1.RoleBinding: csi-mock-volumes-6413-9352/external-snapshotter-leaderelection Nov 13 05:50:50.493: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-mock Nov 13 05:50:50.495: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6413 Nov 13 05:50:50.498: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6413 Nov 13 05:50:50.501: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6413 Nov 13 05:50:50.504: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6413 Nov 13 05:50:50.507: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6413 Nov 13 05:50:50.509: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6413 Nov 13 05:50:50.514: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6413 Nov 13 05:50:50.517: INFO: creating *v1.StatefulSet: csi-mock-volumes-6413-9352/csi-mockplugin Nov 13 05:50:50.521: INFO: creating *v1.CSIDriver: 
csi-mock-csi-mock-volumes-6413 Nov 13 05:50:50.523: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6413" Nov 13 05:50:50.525: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6413 to register on node node1 STEP: Creating pod Nov 13 05:51:00.045: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:51:00.050: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-tfbfm] to have phase Bound Nov 13 05:51:00.052: INFO: PersistentVolumeClaim pvc-tfbfm found but phase is Pending instead of Bound. Nov 13 05:51:02.057: INFO: PersistentVolumeClaim pvc-tfbfm found and phase=Bound (2.007619375s) Nov 13 05:51:02.071: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-tfbfm] to have phase Bound Nov 13 05:51:02.074: INFO: PersistentVolumeClaim pvc-tfbfm found and phase=Bound (2.288953ms) STEP: Waiting for expected CSI calls STEP: Waiting for pod to be running STEP: Deleting the previously created pod Nov 13 05:51:07.112: INFO: Deleting pod "pvc-volume-tester-n4vf8" in namespace "csi-mock-volumes-6413" Nov 13 05:51:07.117: INFO: Wait up to 5m0s for pod "pvc-volume-tester-n4vf8" to be fully deleted STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-n4vf8 Nov 13 05:51:12.132: INFO: Deleting pod "pvc-volume-tester-n4vf8" in namespace "csi-mock-volumes-6413" STEP: Deleting claim pvc-tfbfm Nov 13 05:51:12.143: INFO: Waiting up to 2m0s for PersistentVolume pvc-5d86116a-4532-4b43-ae85-3dcec5da5fd4 to get deleted Nov 13 05:51:12.146: INFO: PersistentVolume pvc-5d86116a-4532-4b43-ae85-3dcec5da5fd4 found and phase=Bound (2.517115ms) Nov 13 05:51:14.151: INFO: PersistentVolume pvc-5d86116a-4532-4b43-ae85-3dcec5da5fd4 was removed STEP: Deleting storageclass csi-mock-volumes-6413-sc4wvll STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6413 STEP: Waiting for namespaces [csi-mock-volumes-6413] to vanish STEP: uninstalling csi mock driver Nov 13 05:51:20.164: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-attacher Nov 13 05:51:20.168: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6413 Nov 13 05:51:20.172: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6413 Nov 13 05:51:20.175: INFO: deleting *v1.Role: csi-mock-volumes-6413-9352/external-attacher-cfg-csi-mock-volumes-6413 Nov 13 05:51:20.179: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6413-9352/csi-attacher-role-cfg Nov 13 05:51:20.182: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-provisioner Nov 13 05:51:20.187: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6413 Nov 13 05:51:20.200: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6413 Nov 13 05:51:20.210: INFO: deleting *v1.Role: csi-mock-volumes-6413-9352/external-provisioner-cfg-csi-mock-volumes-6413 Nov 13 05:51:20.218: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6413-9352/csi-provisioner-role-cfg Nov 13 05:51:20.222: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-resizer Nov 13 05:51:20.226: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6413 Nov 13 05:51:20.230: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6413 Nov 13 05:51:20.233: INFO: deleting *v1.Role: csi-mock-volumes-6413-9352/external-resizer-cfg-csi-mock-volumes-6413 Nov 13 05:51:20.237: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-6413-9352/csi-resizer-role-cfg Nov 13 05:51:20.240: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-snapshotter Nov 13 05:51:20.243: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6413 Nov 13 05:51:20.246: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6413 Nov 13 05:51:20.249: INFO: deleting *v1.Role: csi-mock-volumes-6413-9352/external-snapshotter-leaderelection-csi-mock-volumes-6413 Nov 13 05:51:20.254: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6413-9352/external-snapshotter-leaderelection Nov 13 05:51:20.257: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6413-9352/csi-mock Nov 13 05:51:20.261: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6413 Nov 13 05:51:20.265: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6413 Nov 13 05:51:20.269: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6413 Nov 13 05:51:20.273: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6413 Nov 13 05:51:20.277: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6413 Nov 13 05:51:20.280: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6413 Nov 13 05:51:20.283: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6413 Nov 13 05:51:20.287: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6413-9352/csi-mockplugin Nov 13 05:51:20.291: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6413 STEP: deleting the driver namespace: csi-mock-volumes-6413-9352 STEP: Waiting for namespaces [csi-mock-volumes-6413-9352] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:51:48.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:57.935 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success","total":-1,"completed":15,"skipped":477,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:47:05.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 STEP: Creating secret with name s-test-opt-create-4525e8b6-f217-416c-9957-c24282416a4c STEP: Creating the pod [AfterEach] [sig-storage] Secrets 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:05.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1029" for this suite. • [SLOW TEST:300.056 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 ------------------------------ {"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":17,"skipped":461,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:05.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename flexvolume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:169 Nov 13 05:52:05.157: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:05.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "flexvolume-6580" for this suite. 
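The Flexvolumes spec above is skipped in its BeforeEach because provider "local" has no SSH key at /root/.ssh/id_rsa. As a rough sketch of that prerequisite-skip pattern in Ginkgo (requireSSHKey is a hypothetical helper name, not the upstream framework's actual SSH check):

// Sketch only: a hypothetical Ginkgo BeforeEach that skips a spec when the
// SSH private key the test would need is not present on the test host. The
// real e2e framework has its own SSH helpers; this shows the pattern, not
// the upstream implementation.
package storage_sketch

import (
	"fmt"
	"os"

	"github.com/onsi/ginkgo/v2"
)

// requireSSHKey is a hypothetical helper name.
func requireSSHKey(path string) {
	if _, err := os.Stat(path); err != nil {
		// Skip marks the spec as skipped instead of failed, which is what
		// the Flexvolumes spec above did when /root/.ssh/id_rsa was absent.
		ginkgo.Skip(fmt.Sprintf("no SSH key for this provider: %v", err))
	}
}

var _ = ginkgo.Describe("flexvolume prerequisite sketch", func() {
	ginkgo.BeforeEach(func() {
		requireSSHKey("/root/.ssh/id_rsa")
	})
	ginkgo.It("only runs when the key exists", func() {})
})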
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be mountable when non-attachable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:173 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:50:52.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-323 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:50:52.619: INFO: creating *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-attacher Nov 13 05:50:52.628: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-323 Nov 13 05:50:52.628: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-323 Nov 13 05:50:52.631: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-323 Nov 13 05:50:52.634: INFO: creating *v1.Role: csi-mock-volumes-323-8220/external-attacher-cfg-csi-mock-volumes-323 Nov 13 05:50:52.638: INFO: creating *v1.RoleBinding: csi-mock-volumes-323-8220/csi-attacher-role-cfg Nov 13 05:50:52.641: INFO: creating *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-provisioner Nov 13 05:50:52.643: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-323 Nov 13 05:50:52.643: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-323 Nov 13 05:50:52.646: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-323 Nov 13 05:50:52.649: INFO: creating *v1.Role: csi-mock-volumes-323-8220/external-provisioner-cfg-csi-mock-volumes-323 Nov 13 05:50:52.652: INFO: creating *v1.RoleBinding: csi-mock-volumes-323-8220/csi-provisioner-role-cfg Nov 13 05:50:52.655: INFO: creating *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-resizer Nov 13 05:50:52.657: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-323 Nov 13 05:50:52.657: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-323 Nov 13 05:50:52.660: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-323 Nov 13 05:50:52.663: INFO: creating *v1.Role: csi-mock-volumes-323-8220/external-resizer-cfg-csi-mock-volumes-323 Nov 13 05:50:52.666: INFO: creating *v1.RoleBinding: csi-mock-volumes-323-8220/csi-resizer-role-cfg Nov 13 05:50:52.669: INFO: creating *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-snapshotter Nov 13 05:50:52.671: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-323 Nov 13 05:50:52.671: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-323 Nov 13 05:50:52.674: INFO: 
creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-323 Nov 13 05:50:52.677: INFO: creating *v1.Role: csi-mock-volumes-323-8220/external-snapshotter-leaderelection-csi-mock-volumes-323 Nov 13 05:50:52.680: INFO: creating *v1.RoleBinding: csi-mock-volumes-323-8220/external-snapshotter-leaderelection Nov 13 05:50:52.682: INFO: creating *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-mock Nov 13 05:50:52.685: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-323 Nov 13 05:50:52.687: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-323 Nov 13 05:50:52.691: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-323 Nov 13 05:50:52.693: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-323 Nov 13 05:50:52.696: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-323 Nov 13 05:50:52.698: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-323 Nov 13 05:50:52.701: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-323 Nov 13 05:50:52.703: INFO: creating *v1.StatefulSet: csi-mock-volumes-323-8220/csi-mockplugin Nov 13 05:50:52.708: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-323 Nov 13 05:50:52.711: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-323" Nov 13 05:50:52.712: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-323 to register on node node1 I1113 05:50:59.781835 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-323","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:50:59.876977 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:50:59.878679 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-323","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:50:59.880587 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:50:59.882821 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:51:00.254911 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-323"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:51:02.229: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:51:02.233: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-dv9c2] to have phase Bound Nov 13 05:51:02.235: INFO: PersistentVolumeClaim pvc-dv9c2 found but phase is Pending 
instead of Bound. I1113 05:51:02.240894 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-4f98cf75-9fba-460b-9637-0e6d20c68514","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1113 05:51:02.243005 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-4f98cf75-9fba-460b-9637-0e6d20c68514","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-4f98cf75-9fba-460b-9637-0e6d20c68514"}}},"Error":"","FullError":null} Nov 13 05:51:04.239: INFO: PersistentVolumeClaim pvc-dv9c2 found and phase=Bound (2.005359828s) I1113 05:51:04.713984 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:51:04.717: INFO: >>> kubeConfig: /root/.kube/config I1113 05:51:04.811284 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f98cf75-9fba-460b-9637-0e6d20c68514/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4f98cf75-9fba-460b-9637-0e6d20c68514","storage.kubernetes.io/csiProvisionerIdentity":"1636782659881-8081-csi-mock-csi-mock-volumes-323"}},"Response":{},"Error":"","FullError":null} I1113 05:51:04.815976 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:51:04.817: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:04.916: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:05.071: INFO: >>> kubeConfig: /root/.kube/config I1113 05:51:05.169464 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f98cf75-9fba-460b-9637-0e6d20c68514/globalmount","target_path":"/var/lib/kubelet/pods/26b7e573-3218-4c42-acf6-88e0795b0ff6/volumes/kubernetes.io~csi/pvc-4f98cf75-9fba-460b-9637-0e6d20c68514/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4f98cf75-9fba-460b-9637-0e6d20c68514","storage.kubernetes.io/csiProvisionerIdentity":"1636782659881-8081-csi-mock-csi-mock-volumes-323"}},"Response":{},"Error":"","FullError":null} Nov 13 05:51:08.261: INFO: Deleting pod "pvc-volume-tester-5rwbc" in namespace "csi-mock-volumes-323" Nov 13 05:51:08.266: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5rwbc" to be fully deleted Nov 13 05:51:13.127: INFO: >>> kubeConfig: /root/.kube/config I1113 05:51:13.228687 25 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/26b7e573-3218-4c42-acf6-88e0795b0ff6/volumes/kubernetes.io~csi/pvc-4f98cf75-9fba-460b-9637-0e6d20c68514/mount"},"Response":{},"Error":"","FullError":null} I1113 05:51:13.329456 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:51:13.331375 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f98cf75-9fba-460b-9637-0e6d20c68514/globalmount"},"Response":{},"Error":"","FullError":null} I1113 05:51:22.290983 25 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Nov 13 05:51:23.278: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-dv9c2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-323", SelfLink:"", UID:"4f98cf75-9fba-460b-9637-0e6d20c68514", ResourceVersion:"212993", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379462, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004811e30), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004811e48)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0043c9d40), VolumeMode:(*v1.PersistentVolumeMode)(0xc0043c9d50), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:51:23.278: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-dv9c2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-323", SelfLink:"", UID:"4f98cf75-9fba-460b-9637-0e6d20c68514", ResourceVersion:"212995", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379462, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-323"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003aacc78), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc003aacc90)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003aacca8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003aaccc0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0013cb620), VolumeMode:(*v1.PersistentVolumeMode)(0xc0013cb630), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:51:23.278: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-dv9c2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-323", SelfLink:"", UID:"4f98cf75-9fba-460b-9637-0e6d20c68514", ResourceVersion:"213003", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379462, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-323"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003b0afd8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003b0aff0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003b0b008), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003b0b020)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-4f98cf75-9fba-460b-9637-0e6d20c68514", StorageClassName:(*string)(0xc004408d40), VolumeMode:(*v1.PersistentVolumeMode)(0xc004408d50), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:51:23.278: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-dv9c2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-323", SelfLink:"", UID:"4f98cf75-9fba-460b-9637-0e6d20c68514", ResourceVersion:"213004", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379462, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", 
"pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-323"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c2c060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c2c078)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c2c090), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c2c0a8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-4f98cf75-9fba-460b-9637-0e6d20c68514", StorageClassName:(*string)(0xc00444f6e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00444f6f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:51:23.278: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-dv9c2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-323", SelfLink:"", UID:"4f98cf75-9fba-460b-9637-0e6d20c68514", ResourceVersion:"213241", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379462, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc003c2c0d8), DeletionGracePeriodSeconds:(*int64)(0xc0043cde88), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-323"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c2c0f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c2c108)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c2c120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c2c138)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-4f98cf75-9fba-460b-9637-0e6d20c68514", StorageClassName:(*string)(0xc00444f730), VolumeMode:(*v1.PersistentVolumeMode)(0xc00444f740), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:51:23.278: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-dv9c2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-323", SelfLink:"", UID:"4f98cf75-9fba-460b-9637-0e6d20c68514", ResourceVersion:"213242", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379462, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc003c2c168), DeletionGracePeriodSeconds:(*int64)(0xc0043cdf38), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-323"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c2c180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c2c198)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003c2c1b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003c2c1c8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-4f98cf75-9fba-460b-9637-0e6d20c68514", StorageClassName:(*string)(0xc00444f780), VolumeMode:(*v1.PersistentVolumeMode)(0xc00444f790), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-5rwbc Nov 13 05:51:23.279: INFO: Deleting pod "pvc-volume-tester-5rwbc" in namespace "csi-mock-volumes-323" STEP: Deleting claim pvc-dv9c2 STEP: Deleting storageclass csi-mock-volumes-323-sc27ktq STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-323 STEP: Waiting for namespaces [csi-mock-volumes-323] to vanish STEP: uninstalling csi mock driver Nov 13 05:51:29.313: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-attacher Nov 13 05:51:29.317: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-323 Nov 13 05:51:29.320: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-323 Nov 13 05:51:29.324: INFO: deleting *v1.Role: csi-mock-volumes-323-8220/external-attacher-cfg-csi-mock-volumes-323 Nov 13 05:51:29.327: INFO: deleting *v1.RoleBinding: csi-mock-volumes-323-8220/csi-attacher-role-cfg Nov 13 05:51:29.331: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-provisioner Nov 13 05:51:29.334: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-323 Nov 13 05:51:29.338: 
INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-323 Nov 13 05:51:29.343: INFO: deleting *v1.Role: csi-mock-volumes-323-8220/external-provisioner-cfg-csi-mock-volumes-323 Nov 13 05:51:29.346: INFO: deleting *v1.RoleBinding: csi-mock-volumes-323-8220/csi-provisioner-role-cfg Nov 13 05:51:29.350: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-resizer Nov 13 05:51:29.356: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-323 Nov 13 05:51:29.359: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-323 Nov 13 05:51:29.362: INFO: deleting *v1.Role: csi-mock-volumes-323-8220/external-resizer-cfg-csi-mock-volumes-323 Nov 13 05:51:29.369: INFO: deleting *v1.RoleBinding: csi-mock-volumes-323-8220/csi-resizer-role-cfg Nov 13 05:51:29.374: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-snapshotter Nov 13 05:51:29.377: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-323 Nov 13 05:51:29.380: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-323 Nov 13 05:51:29.383: INFO: deleting *v1.Role: csi-mock-volumes-323-8220/external-snapshotter-leaderelection-csi-mock-volumes-323 Nov 13 05:51:29.387: INFO: deleting *v1.RoleBinding: csi-mock-volumes-323-8220/external-snapshotter-leaderelection Nov 13 05:51:29.390: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-323-8220/csi-mock Nov 13 05:51:29.393: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-323 Nov 13 05:51:29.396: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-323 Nov 13 05:51:29.399: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-323 Nov 13 05:51:29.402: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-323 Nov 13 05:51:29.405: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-323 Nov 13 05:51:29.409: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-323 Nov 13 05:51:29.412: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-323 Nov 13 05:51:29.415: INFO: deleting *v1.StatefulSet: csi-mock-volumes-323-8220/csi-mockplugin Nov 13 05:51:29.418: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-323 STEP: deleting the driver namespace: csi-mock-volumes-323-8220 STEP: Waiting for namespaces [csi-mock-volumes-323-8220] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:13.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:80.880 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":11,"skipped":372,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:13.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 13 05:52:13.523: INFO: The status of Pod test-hostpath-type-82sqc is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:52:15.527: INFO: The status of Pod test-hostpath-type-82sqc is Pending, waiting for it to be Running (with Ready = true) Nov 13 05:52:17.528: INFO: The status of Pod test-hostpath-type-82sqc is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:23.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-4922" for this suite. • [SLOW TEST:10.110 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket","total":-1,"completed":12,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:51:45.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 13 05:51:49.382: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-25d2304a-d08b-4943-aa80-2fe99a2dae42] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:49.382: INFO: >>> kubeConfig: 
/root/.kube/config Nov 13 05:51:49.507: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-00fb2654-7b22-474c-9e89-a5ba52f1c558] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:49.507: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:49.595: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e53da0de-2d99-4008-8452-3a7a2a8efd7e] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:49.595: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:49.674: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1c798e3e-65d5-4f85-934a-53a0f403bbad] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:49.674: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:49.754: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-993fe272-3364-4b4c-bbc0-dd73945e6633] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:49.754: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:49.834: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-63c9a58e-754c-474e-b94d-144a3aa42557] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:51:49.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:51:49.914: INFO: Creating a PV followed by a PVC Nov 13 05:51:49.921: INFO: Creating a PV followed by a PVC Nov 13 05:51:49.927: INFO: Creating a PV followed by a PVC Nov 13 05:51:49.934: INFO: Creating a PV followed by a PVC Nov 13 05:51:49.939: INFO: Creating a PV followed by a PVC Nov 13 05:51:49.944: INFO: Creating a PV followed by a PVC Nov 13 05:51:59.989: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 13 05:52:02.006: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8267a1ed-bc52-4ff9-8202-688d6b7d76bd] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:02.006: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:02.100: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-004ccda2-4eda-4a70-ae6f-c493e01a8141] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:02.100: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:02.188: INFO: ExecWithOptions 
{Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-51578af3-a22b-47c1-91fb-096e5ca9f211] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:02.188: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:02.274: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0c27223b-128f-46ed-88d4-cfee1702c862] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:02.274: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:02.362: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1f22ecf9-e116-479c-bd7a-03fc322d42e9] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:02.362: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:02.500: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0befd4da-aee4-4878-b4df-40606e1f986d] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:02.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:52:02.587: INFO: Creating a PV followed by a PVC Nov 13 05:52:02.594: INFO: Creating a PV followed by a PVC Nov 13 05:52:02.600: INFO: Creating a PV followed by a PVC Nov 13 05:52:02.606: INFO: Creating a PV followed by a PVC Nov 13 05:52:02.611: INFO: Creating a PV followed by a PVC Nov 13 05:52:02.616: INFO: Creating a PV followed by a PVC Nov 13 05:52:12.661: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 STEP: Creating a StatefulSet with pod affinity on nodes Nov 13 05:52:12.668: INFO: Found 0 stateful pods, waiting for 3 Nov 13 05:52:22.674: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:52:22.674: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:52:22.674: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 13 05:52:32.672: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:52:32.672: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:52:32.672: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:52:32.676: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Nov 13 05:52:32.679: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (2.837087ms) Nov 13 05:52:32.679: INFO: Waiting up to timeout=1s for PersistentVolumeClaims 
[vol2-local-volume-statefulset-0] to have phase Bound Nov 13 05:52:32.682: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-0 found and phase=Bound (2.532532ms) Nov 13 05:52:32.682: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Nov 13 05:52:32.685: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (3.19817ms) Nov 13 05:52:32.685: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-1] to have phase Bound Nov 13 05:52:32.688: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-1 found and phase=Bound (2.492603ms) Nov 13 05:52:32.688: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Nov 13 05:52:32.690: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (2.020361ms) Nov 13 05:52:32.690: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-2] to have phase Bound Nov 13 05:52:32.692: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-2 found and phase=Bound (2.570336ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 13 05:52:32.692: INFO: Deleting PersistentVolumeClaim "pvc-cxfjn" Nov 13 05:52:32.699: INFO: Deleting PersistentVolume "local-pvwkfqt" STEP: Cleaning up PVC and PV Nov 13 05:52:32.703: INFO: Deleting PersistentVolumeClaim "pvc-cwqzc" Nov 13 05:52:32.707: INFO: Deleting PersistentVolume "local-pvq4c9c" STEP: Cleaning up PVC and PV Nov 13 05:52:32.710: INFO: Deleting PersistentVolumeClaim "pvc-ltzl5" Nov 13 05:52:32.714: INFO: Deleting PersistentVolume "local-pvkzr9l" STEP: Cleaning up PVC and PV Nov 13 05:52:32.718: INFO: Deleting PersistentVolumeClaim "pvc-b9nbk" Nov 13 05:52:32.722: INFO: Deleting PersistentVolume "local-pvhgj2v" STEP: Cleaning up PVC and PV Nov 13 05:52:32.725: INFO: Deleting PersistentVolumeClaim "pvc-cjm7l" Nov 13 05:52:32.729: INFO: Deleting PersistentVolume "local-pvfvp9l" STEP: Cleaning up PVC and PV Nov 13 05:52:32.733: INFO: Deleting PersistentVolumeClaim "pvc-x4ftb" Nov 13 05:52:32.736: INFO: Deleting PersistentVolume "local-pv5l7fq" STEP: Removing the test directory Nov 13 05:52:32.739: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-25d2304a-d08b-4943-aa80-2fe99a2dae42] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:32.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:32.834: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-00fb2654-7b22-474c-9e89-a5ba52f1c558] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:32.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:32.925: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e53da0de-2d99-4008-8452-3a7a2a8efd7e] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:32.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:33.029: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1c798e3e-65d5-4f85-934a-53a0f403bbad] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:33.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:33.125: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-993fe272-3364-4b4c-bbc0-dd73945e6633] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:33.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:33.216: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-63c9a58e-754c-474e-b94d-144a3aa42557] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node1-8zmwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:33.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 13 05:52:33.305: INFO: Deleting PersistentVolumeClaim "pvc-zgpfm" Nov 13 05:52:33.311: INFO: Deleting PersistentVolume "local-pvxsxlg" STEP: Cleaning up PVC and PV Nov 13 05:52:33.316: INFO: Deleting PersistentVolumeClaim "pvc-j28kx" Nov 13 05:52:33.320: INFO: Deleting PersistentVolume "local-pvpmxmv" STEP: Cleaning up PVC and PV Nov 13 05:52:33.324: INFO: Deleting PersistentVolumeClaim "pvc-f8fcw" Nov 13 05:52:33.328: INFO: Deleting PersistentVolume "local-pvbvhxv" STEP: Cleaning up PVC and PV Nov 13 05:52:33.331: INFO: Deleting PersistentVolumeClaim "pvc-qq28l" Nov 13 05:52:33.335: INFO: Deleting PersistentVolume "local-pvdvh29" STEP: Cleaning up PVC and PV Nov 13 05:52:33.339: INFO: Deleting PersistentVolumeClaim "pvc-4n5cr" Nov 13 05:52:33.342: INFO: Deleting PersistentVolume "local-pv4pdxn" STEP: Cleaning up PVC and PV Nov 13 05:52:33.346: INFO: Deleting PersistentVolumeClaim "pvc-m8qtl" Nov 13 05:52:33.350: INFO: Deleting PersistentVolume "local-pvvvwxk" STEP: Removing the test directory Nov 13 05:52:33.353: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8267a1ed-bc52-4ff9-8202-688d6b7d76bd] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:33.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:33.448: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-004ccda2-4eda-4a70-ae6f-c493e01a8141] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:33.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:33.533: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-51578af3-a22b-47c1-91fb-096e5ca9f211] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:33.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:33.613: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0c27223b-128f-46ed-88d4-cfee1702c862] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:33.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:33.694: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1f22ecf9-e116-479c-bd7a-03fc322d42e9] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:33.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:52:33.782: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0befd4da-aee4-4878-b4df-40606e1f986d] Namespace:persistent-local-volumes-test-5747 PodName:hostexec-node2-b586b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:33.782: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:33.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5747" for this suite. 
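The local volumes set up above are pinned to a single node through the PersistentVolume's nodeAffinity, which is what lets the StatefulSet-with-pod-affinity spec keep every replica and its volumes on one node. A minimal sketch of building such a PV with the client-go API types (buildLocalPV, the path, node name, size, and storage class are illustrative, not values from this run):

// Sketch: a local PersistentVolume restricted to one node via nodeAffinity.
// Names and sizes are illustrative, not taken from the log above.
package storage_sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func buildLocalPV(name, nodeName, hostPath string) *corev1.PersistentVolume {
	fsMode := corev1.PersistentVolumeFilesystem
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage",
			VolumeMode:                    &fsMode,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: hostPath},
			},
			// The node affinity is what makes the volume "local": consuming
			// pods may only be scheduled onto this node.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}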
• [SLOW TEST:48.566 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod has affinity","total":-1,"completed":13,"skipped":456,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:05.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Nov 13 05:52:35.260: INFO: Deleting pod "pv-5833"/"pod-ephm-test-projected-blbr" Nov 13 05:52:35.260: INFO: Deleting pod "pod-ephm-test-projected-blbr" in namespace "pv-5833" Nov 13 05:52:35.265: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-blbr" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:43.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5833" for this suite. 
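The "Wait up to 5m0s for pod ... to be fully deleted" lines above come from a delete-then-poll pattern against the API server. A minimal client-go sketch of that pattern (deletePodAndWait is a hypothetical name, not the e2e framework's helper):

// Sketch: delete a pod and poll until the API server reports NotFound,
// mirroring the "to be fully deleted" waits above.
package storage_sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func deletePodAndWait(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	if err := c.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	// Poll every 2s, for up to 5 minutes, until the pod object is gone.
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // fully deleted
		}
		return false, err // still present (err == nil) or a real error
	})
}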
• [SLOW TEST:38.063 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":18,"skipped":520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:42:42.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [It] should fail due to non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307 STEP: Creating local PVC and PV Nov 13 05:42:42.486: INFO: Creating a PV followed by a PVC Nov 13 05:42:42.494: INFO: Waiting for PV local-pvrggm9 to bind to PVC pvc-f7zvn Nov 13 05:42:42.494: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-f7zvn] to have phase Bound Nov 13 05:42:42.496: INFO: PersistentVolumeClaim pvc-f7zvn found but phase is Pending instead of Bound. Nov 13 05:42:44.501: INFO: PersistentVolumeClaim pvc-f7zvn found and phase=Bound (2.006936378s) Nov 13 05:42:44.501: INFO: Waiting up to 3m0s for PersistentVolume local-pvrggm9 to have phase Bound Nov 13 05:42:44.503: INFO: PersistentVolume local-pvrggm9 found and phase=Bound (2.28607ms) STEP: Creating a pod STEP: Cleaning up PVC and PV Nov 13 05:52:44.540: INFO: Deleting PersistentVolumeClaim "pvc-f7zvn" Nov 13 05:52:44.544: INFO: Deleting PersistentVolume "local-pvrggm9" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:44.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6328" for this suite. 
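The repeated "Waiting up to timeout=... for PersistentVolumeClaims [...] to have phase Bound" messages above reflect a simple poll on the claim's status phase. A minimal client-go sketch of that wait (waitForPVCBound is a hypothetical helper name, not the e2e framework's):

// Sketch: poll a PersistentVolumeClaim until it reports phase Bound.
package storage_sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPVCBound(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Pending is expected for a while; only Bound ends the wait.
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}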
• [SLOW TEST:602.097 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Local volume that cannot be mounted [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304 should fail due to non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to non-existent path","total":-1,"completed":4,"skipped":194,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:44.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mounted-volume-expand STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:61 Nov 13 05:52:44.628: INFO: Only supported for providers [aws gce] (not local) [AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:44.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mounted-volume-expand-5469" for this suite. [AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:108 Nov 13 05:52:44.641: INFO: AfterEach: Cleaning up resources for mounted volume resize S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should verify mounted devices can be resized [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122 Only supported for providers [aws gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:62 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:33.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-94415f8b-96cc-48b8-a9cc-82e3f971ccdb" Nov 
13 05:52:35.989: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-94415f8b-96cc-48b8-a9cc-82e3f971ccdb && dd if=/dev/zero of=/tmp/local-volume-test-94415f8b-96cc-48b8-a9cc-82e3f971ccdb/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-94415f8b-96cc-48b8-a9cc-82e3f971ccdb/file] Namespace:persistent-local-volumes-test-8875 PodName:hostexec-node2-zpf6m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:35.989: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:36.131: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-94415f8b-96cc-48b8-a9cc-82e3f971ccdb/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8875 PodName:hostexec-node2-zpf6m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:36.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:52:36.212: INFO: Creating a PV followed by a PVC Nov 13 05:52:36.219: INFO: Waiting for PV local-pvgz4bt to bind to PVC pvc-468xf Nov 13 05:52:36.219: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-468xf] to have phase Bound Nov 13 05:52:36.221: INFO: PersistentVolumeClaim pvc-468xf found but phase is Pending instead of Bound. Nov 13 05:52:38.224: INFO: PersistentVolumeClaim pvc-468xf found and phase=Bound (2.0054472s) Nov 13 05:52:38.224: INFO: Waiting up to 3m0s for PersistentVolume local-pvgz4bt to have phase Bound Nov 13 05:52:38.227: INFO: PersistentVolume local-pvgz4bt found and phase=Bound (2.610968ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:52:44.257: INFO: pod "pod-0293fe02-cae3-40f8-b436-c493f752717c" created on Node "node2" STEP: Writing in pod1 Nov 13 05:52:44.257: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8875 PodName:pod-0293fe02-cae3-40f8-b436-c493f752717c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:52:44.257: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:44.329: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 13 05:52:44.329: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8875 PodName:pod-0293fe02-cae3-40f8-b436-c493f752717c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:52:44.329: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:44.411: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 13 05:52:44.411: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8875 
PodName:pod-0293fe02-cae3-40f8-b436-c493f752717c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:52:44.411: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:44.505: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-0293fe02-cae3-40f8-b436-c493f752717c in namespace persistent-local-volumes-test-8875 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:52:44.509: INFO: Deleting PersistentVolumeClaim "pvc-468xf" Nov 13 05:52:44.513: INFO: Deleting PersistentVolume "local-pvgz4bt" Nov 13 05:52:44.517: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-94415f8b-96cc-48b8-a9cc-82e3f971ccdb/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8875 PodName:hostexec-node2-zpf6m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:44.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-94415f8b-96cc-48b8-a9cc-82e3f971ccdb/file Nov 13 05:52:44.626: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8875 PodName:hostexec-node2-zpf6m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:44.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-94415f8b-96cc-48b8-a9cc-82e3f971ccdb Nov 13 05:52:44.709: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-94415f8b-96cc-48b8-a9cc-82e3f971ccdb] Namespace:persistent-local-volumes-test-8875 PodName:hostexec-node2-zpf6m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:44.709: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:44.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8875" for this suite. 
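The blockfswithoutformat setup and teardown above reduce to a handful of shell steps on the node: create a backing file with dd, attach a free loop device with losetup, discover the device with the losetup|grep|awk idiom, and later detach it and remove the directory. A rough standalone equivalent, run directly on a Linux node as root rather than through a hostexec pod with nsenter, could be:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sh runs one shell command and panics with its output on failure.
func sh(cmd string) string {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s: %v\n%s", cmd, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	dir := "/tmp/local-volume-test-example" // example path, not the test's generated UUID path

	// Setup: a ~20 MiB backing file plus a free loop device, as in the log.
	sh("mkdir -p " + dir)
	sh("dd if=/dev/zero of=" + dir + "/file bs=4096 count=5120")
	sh("losetup -f " + dir + "/file")

	// Discover which loop device received the file (same losetup|grep|awk idiom).
	dev := sh("losetup | grep " + dir + "/file | awk '{ print $1 }'")
	fmt.Println("loop device:", dev)

	// Teardown: detach the device and remove the backing directory.
	sh("losetup -d " + dev)
	sh("rm -r " + dir)
}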
• [SLOW TEST:10.861 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":14,"skipped":474,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:44.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:52:48.702: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-bd4813a2-9ba6-48c2-b0d3-13651083fae7-backend && mount --bind /tmp/local-volume-test-bd4813a2-9ba6-48c2-b0d3-13651083fae7-backend /tmp/local-volume-test-bd4813a2-9ba6-48c2-b0d3-13651083fae7-backend && ln -s /tmp/local-volume-test-bd4813a2-9ba6-48c2-b0d3-13651083fae7-backend /tmp/local-volume-test-bd4813a2-9ba6-48c2-b0d3-13651083fae7] Namespace:persistent-local-volumes-test-4327 PodName:hostexec-node1-j7lqb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:48.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:52:48.903: INFO: Creating a PV followed by a PVC Nov 13 05:52:48.910: INFO: Waiting for PV local-pv9mpw6 to bind to PVC pvc-kfkg4 Nov 13 05:52:48.910: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-kfkg4] to have phase Bound Nov 13 05:52:48.912: INFO: PersistentVolumeClaim pvc-kfkg4 found but phase is Pending instead of Bound. Nov 13 05:52:50.915: INFO: PersistentVolumeClaim pvc-kfkg4 found but phase is Pending instead of Bound. Nov 13 05:52:52.919: INFO: PersistentVolumeClaim pvc-kfkg4 found but phase is Pending instead of Bound. Nov 13 05:52:54.923: INFO: PersistentVolumeClaim pvc-kfkg4 found but phase is Pending instead of Bound. 
Nov 13 05:52:56.927: INFO: PersistentVolumeClaim pvc-kfkg4 found and phase=Bound (8.017343237s) Nov 13 05:52:56.927: INFO: Waiting up to 3m0s for PersistentVolume local-pv9mpw6 to have phase Bound Nov 13 05:52:56.929: INFO: PersistentVolume local-pv9mpw6 found and phase=Bound (1.89463ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 13 05:52:56.935: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:52:56.936: INFO: Deleting PersistentVolumeClaim "pvc-kfkg4" Nov 13 05:52:56.940: INFO: Deleting PersistentVolume "local-pv9mpw6" STEP: Removing the test directory Nov 13 05:52:56.944: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-bd4813a2-9ba6-48c2-b0d3-13651083fae7 && umount /tmp/local-volume-test-bd4813a2-9ba6-48c2-b0d3-13651083fae7-backend && rm -r /tmp/local-volume-test-bd4813a2-9ba6-48c2-b0d3-13651083fae7-backend] Namespace:persistent-local-volumes-test-4327 PodName:hostexec-node1-j7lqb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:56.944: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:52:57.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4327" for this suite. 
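The "Creating a PV followed by a PVC" steps in the dir-link-bindmounted setup above rely on a local PersistentVolume pinned to a single node through required node affinity; the claim then binds to it by storage class. A sketch of such a PV object follows — the storage class name, capacity, path and node name are placeholders, not the values the test generates:

package localpv

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NewLocalPV returns a local PersistentVolume usable only on the given node.
func NewLocalPV(name, path, node string) *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage",
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// The path may be a plain directory, a bind mount, or a symlink
				// to one, as with the dir-link-bindmounted volume type above.
				Local: &corev1.LocalVolumeSource{Path: path},
			},
			// Local volumes must declare which node can serve them.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{node},
						}},
					}},
				},
			},
		},
	}
}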
S [SKIPPING] [12.436 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:57.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 STEP: Creating a pod to test downward API volume plugin Nov 13 05:52:57.333: INFO: Waiting up to 5m0s for pod "metadata-volume-760bbf93-d1f6-4b01-b958-69d64c099852" in namespace "downward-api-7244" to be "Succeeded or Failed" Nov 13 05:52:57.335: INFO: Pod "metadata-volume-760bbf93-d1f6-4b01-b958-69d64c099852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081667ms Nov 13 05:52:59.339: INFO: Pod "metadata-volume-760bbf93-d1f6-4b01-b958-69d64c099852": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005361706s Nov 13 05:53:01.343: INFO: Pod "metadata-volume-760bbf93-d1f6-4b01-b958-69d64c099852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009489267s STEP: Saw pod success Nov 13 05:53:01.343: INFO: Pod "metadata-volume-760bbf93-d1f6-4b01-b958-69d64c099852" satisfied condition "Succeeded or Failed" Nov 13 05:53:01.345: INFO: Trying to get logs from node node2 pod metadata-volume-760bbf93-d1f6-4b01-b958-69d64c099852 container client-container: STEP: delete the pod Nov 13 05:53:01.365: INFO: Waiting for pod metadata-volume-760bbf93-d1f6-4b01-b958-69d64c099852 to disappear Nov 13 05:53:01.366: INFO: Pod metadata-volume-760bbf93-d1f6-4b01-b958-69d64c099852 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:01.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7244" for this suite. 
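The Downward API spec above projects pod metadata into a file and verifies it is readable by a non-root user with an fsGroup applied. A pod object along those lines might be built as below; the user and group IDs, image and names are illustrative rather than the test's actual values:

package downward

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NewDownwardAPIPod projects metadata.name into /etc/podinfo/podname and runs
// the container as a non-root user with an fsGroup applied to the volume.
func NewDownwardAPIPod(name string) *corev1.Pod {
	uid, gid := int64(1000), int64(2000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid, // non-root user
				FSGroup:   &gid, // group ownership applied to the downward API volume
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
}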
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":330,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:51:45.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-434 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:51:45.395: INFO: creating *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-attacher Nov 13 05:51:45.399: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-434 Nov 13 05:51:45.399: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-434 Nov 13 05:51:45.401: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-434 Nov 13 05:51:45.404: INFO: creating *v1.Role: csi-mock-volumes-434-5188/external-attacher-cfg-csi-mock-volumes-434 Nov 13 05:51:45.407: INFO: creating *v1.RoleBinding: csi-mock-volumes-434-5188/csi-attacher-role-cfg Nov 13 05:51:45.409: INFO: creating *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-provisioner Nov 13 05:51:45.412: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-434 Nov 13 05:51:45.412: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-434 Nov 13 05:51:45.415: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-434 Nov 13 05:51:45.418: INFO: creating *v1.Role: csi-mock-volumes-434-5188/external-provisioner-cfg-csi-mock-volumes-434 Nov 13 05:51:45.421: INFO: creating *v1.RoleBinding: csi-mock-volumes-434-5188/csi-provisioner-role-cfg Nov 13 05:51:45.423: INFO: creating *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-resizer Nov 13 05:51:45.426: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-434 Nov 13 05:51:45.426: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-434 Nov 13 05:51:45.428: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-434 Nov 13 05:51:45.431: INFO: creating *v1.Role: csi-mock-volumes-434-5188/external-resizer-cfg-csi-mock-volumes-434 Nov 13 05:51:45.433: INFO: creating *v1.RoleBinding: csi-mock-volumes-434-5188/csi-resizer-role-cfg Nov 13 05:51:45.436: INFO: creating *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-snapshotter Nov 13 05:51:45.438: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-434 Nov 13 05:51:45.438: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-434 Nov 13 05:51:45.441: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-434 Nov 13 05:51:45.443: INFO: creating *v1.Role: csi-mock-volumes-434-5188/external-snapshotter-leaderelection-csi-mock-volumes-434 Nov 13 05:51:45.446: INFO: creating *v1.RoleBinding: csi-mock-volumes-434-5188/external-snapshotter-leaderelection Nov 13 05:51:45.448: INFO: creating *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-mock Nov 13 05:51:45.451: 
INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-434 Nov 13 05:51:45.454: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-434 Nov 13 05:51:45.456: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-434 Nov 13 05:51:45.460: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-434 Nov 13 05:51:45.462: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-434 Nov 13 05:51:45.465: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-434 Nov 13 05:51:45.467: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-434 Nov 13 05:51:45.470: INFO: creating *v1.StatefulSet: csi-mock-volumes-434-5188/csi-mockplugin Nov 13 05:51:45.475: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-434 Nov 13 05:51:45.477: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-434" Nov 13 05:51:45.480: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-434 to register on node node1 I1113 05:51:50.569056 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-434","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:51:50.648346 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:51:50.649831 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-434","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:51:50.690110 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I1113 05:51:50.692171 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:51:50.930940 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-434","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:51:54.997: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I1113 05:51:55.026369 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc 
error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1113 05:51:57.743037 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I1113 05:51:58.945625 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:51:58.947: INFO: >>> kubeConfig: /root/.kube/config I1113 05:51:59.047846 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f","storage.kubernetes.io/csiProvisionerIdentity":"1636782710729-8081-csi-mock-csi-mock-volumes-434"}},"Response":{},"Error":"","FullError":null} I1113 05:51:59.053613 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 13 05:51:59.055: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:59.143: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:51:59.235: INFO: >>> kubeConfig: /root/.kube/config I1113 05:51:59.322499 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f/globalmount","target_path":"/var/lib/kubelet/pods/32a4d907-38a3-423f-9130-7cc8b9c4af7b/volumes/kubernetes.io~csi/pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f","storage.kubernetes.io/csiProvisionerIdentity":"1636782710729-8081-csi-mock-csi-mock-volumes-434"}},"Response":{},"Error":"","FullError":null} Nov 13 05:52:03.022: INFO: Deleting pod "pvc-volume-tester-t58kd" in namespace "csi-mock-volumes-434" Nov 13 05:52:03.027: INFO: Wait up to 5m0s for pod "pvc-volume-tester-t58kd" to be fully deleted Nov 13 05:52:05.855: INFO: >>> kubeConfig: /root/.kube/config I1113 05:52:05.954566 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/32a4d907-38a3-423f-9130-7cc8b9c4af7b/volumes/kubernetes.io~csi/pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f/mount"},"Response":{},"Error":"","FullError":null} I1113 05:52:06.058364 29 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:52:06.060734 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f/globalmount"},"Response":{},"Error":"","FullError":null} I1113 05:52:13.051732 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Nov 13 05:52:14.038: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tvj8p", GenerateName:"pvc-", Namespace:"csi-mock-volumes-434", SelfLink:"", UID:"09bc9174-0021-4fbd-84d8-6f54ec21f83f", ResourceVersion:"213710", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379514, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003ecba88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003ecbaa0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004ab5000), VolumeMode:(*v1.PersistentVolumeMode)(0xc004ab5010), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:52:14.038: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tvj8p", GenerateName:"pvc-", Namespace:"csi-mock-volumes-434", SelfLink:"", UID:"09bc9174-0021-4fbd-84d8-6f54ec21f83f", ResourceVersion:"213713", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379514, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0005dc438), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0005dc450)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0005dc468), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0005dc480)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0007a2520), VolumeMode:(*v1.PersistentVolumeMode)(0xc0007a2580), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:52:14.038: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tvj8p", GenerateName:"pvc-", Namespace:"csi-mock-volumes-434", SelfLink:"", UID:"09bc9174-0021-4fbd-84d8-6f54ec21f83f", ResourceVersion:"213714", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379514, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-434", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004672e58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004672e70)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004672e88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004672ea0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004672eb8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004672ed0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004ab5c10), VolumeMode:(*v1.PersistentVolumeMode)(0xc004ab5c20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:52:14.039: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tvj8p", GenerateName:"pvc-", Namespace:"csi-mock-volumes-434", SelfLink:"", UID:"09bc9174-0021-4fbd-84d8-6f54ec21f83f", ResourceVersion:"213717", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379514, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-434"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", 
APIVersion:"v1", Time:(*v1.Time)(0xc004c40ff0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c41008)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c41020), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c41038)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c41050), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c41068)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000678970), VolumeMode:(*v1.PersistentVolumeMode)(0xc0006789b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:52:14.039: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tvj8p", GenerateName:"pvc-", Namespace:"csi-mock-volumes-434", SelfLink:"", UID:"09bc9174-0021-4fbd-84d8-6f54ec21f83f", ResourceVersion:"213744", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379514, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-434", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c41098), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c410b0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c410c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c410e0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c410f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c41110)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000678a90), VolumeMode:(*v1.PersistentVolumeMode)(0xc000678aa0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:52:14.039: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tvj8p", GenerateName:"pvc-", Namespace:"csi-mock-volumes-434", SelfLink:"", 
UID:"09bc9174-0021-4fbd-84d8-6f54ec21f83f", ResourceVersion:"213750", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379514, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-434", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004673110), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004673128)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004673140), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004673158)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004673170), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004673188)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f", StorageClassName:(*string)(0xc00067a170), VolumeMode:(*v1.PersistentVolumeMode)(0xc00067a1c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:52:14.039: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tvj8p", GenerateName:"pvc-", Namespace:"csi-mock-volumes-434", SelfLink:"", UID:"09bc9174-0021-4fbd-84d8-6f54ec21f83f", ResourceVersion:"213751", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379514, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-434", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0046731b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0046731d0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0046731e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004673200)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004673218), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004673230)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f", StorageClassName:(*string)(0xc00067a290), VolumeMode:(*v1.PersistentVolumeMode)(0xc00067a310), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:52:14.039: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tvj8p", GenerateName:"pvc-", Namespace:"csi-mock-volumes-434", SelfLink:"", UID:"09bc9174-0021-4fbd-84d8-6f54ec21f83f", ResourceVersion:"213920", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379514, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004c41140), DeletionGracePeriodSeconds:(*int64)(0xc0056cf1a8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-434", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c41158), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c41170)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c41188), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c411a0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c411b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c411d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f", StorageClassName:(*string)(0xc000678b20), VolumeMode:(*v1.PersistentVolumeMode)(0xc000678b40), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 13 05:52:14.039: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-tvj8p", GenerateName:"pvc-", Namespace:"csi-mock-volumes-434", SelfLink:"", UID:"09bc9174-0021-4fbd-84d8-6f54ec21f83f", ResourceVersion:"213921", Generation:0, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63772379514, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004c41200), DeletionGracePeriodSeconds:(*int64)(0xc0056cf278), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-434", "volume.kubernetes.io/selected-node":"node1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c41218), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c41230)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c41248), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c41260)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c41278), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c412a8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-09bc9174-0021-4fbd-84d8-6f54ec21f83f", StorageClassName:(*string)(0xc000678c80), VolumeMode:(*v1.PersistentVolumeMode)(0xc000678ce0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-t58kd Nov 13 05:52:14.040: INFO: Deleting pod "pvc-volume-tester-t58kd" in namespace "csi-mock-volumes-434" STEP: Deleting claim pvc-tvj8p STEP: Deleting storageclass csi-mock-volumes-434-scjqg8c STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-434 STEP: Waiting for namespaces [csi-mock-volumes-434] to vanish STEP: uninstalling csi mock driver Nov 13 05:52:20.078: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-attacher Nov 13 05:52:20.082: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-434 Nov 13 05:52:20.087: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-434 Nov 13 05:52:20.091: INFO: deleting *v1.Role: csi-mock-volumes-434-5188/external-attacher-cfg-csi-mock-volumes-434 Nov 13 05:52:20.094: INFO: deleting *v1.RoleBinding: csi-mock-volumes-434-5188/csi-attacher-role-cfg Nov 13 05:52:20.098: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-provisioner Nov 13 05:52:20.102: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-434 Nov 13 05:52:20.107: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-434 Nov 13 05:52:20.110: INFO: deleting *v1.Role: csi-mock-volumes-434-5188/external-provisioner-cfg-csi-mock-volumes-434 Nov 13 05:52:20.114: INFO: deleting *v1.RoleBinding: csi-mock-volumes-434-5188/csi-provisioner-role-cfg Nov 13 
05:52:20.117: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-resizer Nov 13 05:52:20.120: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-434 Nov 13 05:52:20.123: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-434 Nov 13 05:52:20.127: INFO: deleting *v1.Role: csi-mock-volumes-434-5188/external-resizer-cfg-csi-mock-volumes-434 Nov 13 05:52:20.130: INFO: deleting *v1.RoleBinding: csi-mock-volumes-434-5188/csi-resizer-role-cfg Nov 13 05:52:20.133: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-snapshotter Nov 13 05:52:20.137: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-434 Nov 13 05:52:20.140: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-434 Nov 13 05:52:20.144: INFO: deleting *v1.Role: csi-mock-volumes-434-5188/external-snapshotter-leaderelection-csi-mock-volumes-434 Nov 13 05:52:20.148: INFO: deleting *v1.RoleBinding: csi-mock-volumes-434-5188/external-snapshotter-leaderelection Nov 13 05:52:20.151: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-434-5188/csi-mock Nov 13 05:52:20.155: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-434 Nov 13 05:52:20.158: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-434 Nov 13 05:52:20.162: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-434 Nov 13 05:52:20.166: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-434 Nov 13 05:52:20.169: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-434 Nov 13 05:52:20.173: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-434 Nov 13 05:52:20.176: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-434 Nov 13 05:52:20.179: INFO: deleting *v1.StatefulSet: csi-mock-volumes-434-5188/csi-mockplugin Nov 13 05:52:20.183: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-434 STEP: deleting the driver namespace: csi-mock-volumes-434-5188 STEP: Waiting for namespaces [csi-mock-volumes-434-5188] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:04.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:78.872 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":14,"skipped":566,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:04.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f" Nov 13 05:53:06.301: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f && dd if=/dev/zero of=/tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f/file] Namespace:persistent-local-volumes-test-7844 PodName:hostexec-node1-zc66r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:06.301: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:06.434: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7844 PodName:hostexec-node1-zc66r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:06.434: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:06.534: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f && chmod o+rwx /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f] Namespace:persistent-local-volumes-test-7844 PodName:hostexec-node1-zc66r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:06.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:53:06.796: INFO: Creating a PV followed by a PVC Nov 13 05:53:06.803: INFO: Waiting for PV local-pv9gpmg to bind to PVC pvc-xfpsn Nov 13 05:53:06.803: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xfpsn] to have phase Bound Nov 13 05:53:06.806: INFO: PersistentVolumeClaim pvc-xfpsn found but phase is Pending instead of Bound. Nov 13 05:53:08.810: INFO: PersistentVolumeClaim pvc-xfpsn found but phase is Pending instead of Bound. Nov 13 05:53:10.813: INFO: PersistentVolumeClaim pvc-xfpsn found but phase is Pending instead of Bound. 
Nov 13 05:53:12.817: INFO: PersistentVolumeClaim pvc-xfpsn found and phase=Bound (6.013364902s) Nov 13 05:53:12.817: INFO: Waiting up to 3m0s for PersistentVolume local-pv9gpmg to have phase Bound Nov 13 05:53:12.820: INFO: PersistentVolume local-pv9gpmg found and phase=Bound (3.674911ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 13 05:53:16.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7844 exec pod-abdbc4cc-acc3-49ae-9202-17a39800d629 --namespace=persistent-local-volumes-test-7844 -- stat -c %g /mnt/volume1' Nov 13 05:53:17.214: INFO: stderr: "" Nov 13 05:53:17.214: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 13 05:53:21.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7844 exec pod-d75cc066-2604-4f1f-8b48-9938c5f0662b --namespace=persistent-local-volumes-test-7844 -- stat -c %g /mnt/volume1' Nov 13 05:53:21.486: INFO: stderr: "" Nov 13 05:53:21.486: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-abdbc4cc-acc3-49ae-9202-17a39800d629 in namespace persistent-local-volumes-test-7844 STEP: Deleting second pod STEP: Deleting pod pod-d75cc066-2604-4f1f-8b48-9938c5f0662b in namespace persistent-local-volumes-test-7844 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:53:21.496: INFO: Deleting PersistentVolumeClaim "pvc-xfpsn" Nov 13 05:53:21.499: INFO: Deleting PersistentVolume "local-pv9gpmg" Nov 13 05:53:21.503: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f] Namespace:persistent-local-volumes-test-7844 PodName:hostexec-node1-zc66r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:21.503: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:21.644: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7844 PodName:hostexec-node1-zc66r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:21.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f/file Nov 13 05:53:21.752: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-7844 PodName:hostexec-node1-zc66r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:21.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test 
directory /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f Nov 13 05:53:21.840: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-834d3fec-ec6a-4a4c-afc5-a7f9769fc06f] Namespace:persistent-local-volumes-test-7844 PodName:hostexec-node1-zc66r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:21.841: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:22.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7844" for this suite. • [SLOW TEST:17.760 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":15,"skipped":587,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:43.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 13 05:52:47.418: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-45e7532f-80c5-4698-8f6a-53d07a860d36] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:47.418: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:48.847: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f1f10a56-8ca8-47f9-ba7c-8ec60f93448b] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:48.847: 
INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:49.511: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fe0e218f-435b-42b0-a3e5-8866ded1262a] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:49.511: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:51.146: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-040e8106-1880-4f1d-9c40-7e633f39a1e7] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:51.146: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:51.235: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8c4c45f4-f9fd-4d69-9e32-1d9625d67abe] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:51.235: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:52:51.319: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d188572b-a540-40d4-94ba-4f5b4ed64f01] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:52:51.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:52:51.411: INFO: Creating a PV followed by a PVC Nov 13 05:52:51.416: INFO: Creating a PV followed by a PVC Nov 13 05:52:51.422: INFO: Creating a PV followed by a PVC Nov 13 05:52:51.428: INFO: Creating a PV followed by a PVC Nov 13 05:52:51.433: INFO: Creating a PV followed by a PVC Nov 13 05:52:51.441: INFO: Creating a PV followed by a PVC Nov 13 05:53:01.484: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 13 05:53:03.501: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-51b3690d-69ba-4075-adaf-378885d379c5] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:03.501: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:03.600: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-21e04ff5-4c56-4da3-bcbc-1b8fda6b4547] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:03.600: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:03.692: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ca03bf54-0fe2-456d-b091-5ee564fc32fe] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:03.692: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:03.778: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-eac6f4ff-c393-4e6d-ab5b-b1aed4775ec8] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:03.779: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:03.858: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-481df1ca-4fa2-49fb-882c-bb1339f6cb43] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:03.858: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:03.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bc696a26-493e-44ac-8d0e-e378268ef7d3] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:03.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:53:04.020: INFO: Creating a PV followed by a PVC Nov 13 05:53:04.027: INFO: Creating a PV followed by a PVC Nov 13 05:53:04.033: INFO: Creating a PV followed by a PVC Nov 13 05:53:04.039: INFO: Creating a PV followed by a PVC Nov 13 05:53:04.044: INFO: Creating a PV followed by a PVC Nov 13 05:53:04.050: INFO: Creating a PV followed by a PVC Nov 13 05:53:14.099: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 STEP: Creating a StatefulSet with pod affinity on nodes Nov 13 05:53:14.109: INFO: Found 0 stateful pods, waiting for 3 Nov 13 05:53:24.113: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:53:24.113: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:53:24.113: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Nov 13 05:53:24.117: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Nov 13 05:53:24.120: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (2.269652ms) Nov 13 05:53:24.120: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Nov 13 05:53:24.122: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (2.426477ms) Nov 13 05:53:24.122: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Nov 13 05:53:24.125: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (2.582ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 13 05:53:24.125: INFO: Deleting PersistentVolumeClaim "pvc-fvfwk" Nov 13 05:53:24.131: INFO: Deleting PersistentVolume "local-pvxw69w" STEP: Cleaning up PVC and PV Nov 13 
05:53:24.135: INFO: Deleting PersistentVolumeClaim "pvc-ct5ht" Nov 13 05:53:24.139: INFO: Deleting PersistentVolume "local-pvbxfsd" STEP: Cleaning up PVC and PV Nov 13 05:53:24.143: INFO: Deleting PersistentVolumeClaim "pvc-58z2l" Nov 13 05:53:24.146: INFO: Deleting PersistentVolume "local-pvtskh6" STEP: Cleaning up PVC and PV Nov 13 05:53:24.150: INFO: Deleting PersistentVolumeClaim "pvc-bf967" Nov 13 05:53:24.153: INFO: Deleting PersistentVolume "local-pv9w7g6" STEP: Cleaning up PVC and PV Nov 13 05:53:24.157: INFO: Deleting PersistentVolumeClaim "pvc-99pt9" Nov 13 05:53:24.161: INFO: Deleting PersistentVolume "local-pvk6nds" STEP: Cleaning up PVC and PV Nov 13 05:53:24.164: INFO: Deleting PersistentVolumeClaim "pvc-l4z9r" Nov 13 05:53:24.167: INFO: Deleting PersistentVolume "local-pvprf7l" STEP: Removing the test directory Nov 13 05:53:24.171: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-45e7532f-80c5-4698-8f6a-53d07a860d36] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:24.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:24.346: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f1f10a56-8ca8-47f9-ba7c-8ec60f93448b] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:24.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:24.437: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fe0e218f-435b-42b0-a3e5-8866ded1262a] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:24.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:24.535: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-040e8106-1880-4f1d-9c40-7e633f39a1e7] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:24.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:24.624: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8c4c45f4-f9fd-4d69-9e32-1d9625d67abe] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:24.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:24.735: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d188572b-a540-40d4-94ba-4f5b4ed64f01] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node1-dbvs7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:24.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 13 
05:53:24.822: INFO: Deleting PersistentVolumeClaim "pvc-46xwl" Nov 13 05:53:24.825: INFO: Deleting PersistentVolume "local-pvvxrp7" STEP: Cleaning up PVC and PV Nov 13 05:53:24.830: INFO: Deleting PersistentVolumeClaim "pvc-b488c" Nov 13 05:53:24.833: INFO: Deleting PersistentVolume "local-pvvxmkn" STEP: Cleaning up PVC and PV Nov 13 05:53:24.837: INFO: Deleting PersistentVolumeClaim "pvc-zgg4x" Nov 13 05:53:24.840: INFO: Deleting PersistentVolume "local-pvkp8fp" STEP: Cleaning up PVC and PV Nov 13 05:53:24.843: INFO: Deleting PersistentVolumeClaim "pvc-8q9t2" Nov 13 05:53:24.847: INFO: Deleting PersistentVolume "local-pvc6pkl" STEP: Cleaning up PVC and PV Nov 13 05:53:24.851: INFO: Deleting PersistentVolumeClaim "pvc-d49x8" Nov 13 05:53:24.855: INFO: Deleting PersistentVolume "local-pvgbghr" STEP: Cleaning up PVC and PV Nov 13 05:53:24.858: INFO: Deleting PersistentVolumeClaim "pvc-5wcqh" Nov 13 05:53:24.864: INFO: Deleting PersistentVolume "local-pv6nc4n" STEP: Removing the test directory Nov 13 05:53:24.868: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-51b3690d-69ba-4075-adaf-378885d379c5] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:24.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:24.969: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-21e04ff5-4c56-4da3-bcbc-1b8fda6b4547] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:24.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:25.139: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ca03bf54-0fe2-456d-b091-5ee564fc32fe] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:25.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:25.225: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-eac6f4ff-c393-4e6d-ab5b-b1aed4775ec8] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:25.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:25.310: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-481df1ca-4fa2-49fb-882c-bb1339f6cb43] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:25.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 13 05:53:25.404: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bc696a26-493e-44ac-8d0e-e378268ef7d3] Namespace:persistent-local-volumes-test-4400 PodName:hostexec-node2-wmtrd ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:25.404: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:25.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4400" for this suite. • [SLOW TEST:42.162 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod management is parallel and pod has affinity","total":-1,"completed":19,"skipped":559,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:23.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-3809 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:52:23.731: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-attacher Nov 13 05:52:23.734: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3809 Nov 13 05:52:23.734: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3809 Nov 13 05:52:23.737: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3809 Nov 13 05:52:23.740: INFO: creating *v1.Role: csi-mock-volumes-3809-6597/external-attacher-cfg-csi-mock-volumes-3809 Nov 13 05:52:23.743: INFO: creating *v1.RoleBinding: csi-mock-volumes-3809-6597/csi-attacher-role-cfg Nov 13 05:52:23.746: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-provisioner Nov 13 05:52:23.749: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3809 Nov 13 05:52:23.749: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3809 Nov 13 05:52:23.752: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3809 Nov 13 05:52:23.754: INFO: creating *v1.Role: csi-mock-volumes-3809-6597/external-provisioner-cfg-csi-mock-volumes-3809 Nov 13 05:52:23.757: INFO: creating *v1.RoleBinding: csi-mock-volumes-3809-6597/csi-provisioner-role-cfg Nov 13 05:52:23.760: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-resizer Nov 13 05:52:23.762: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3809 Nov 13 05:52:23.762: INFO: Define cluster role 
external-resizer-runner-csi-mock-volumes-3809 Nov 13 05:52:23.766: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3809 Nov 13 05:52:23.768: INFO: creating *v1.Role: csi-mock-volumes-3809-6597/external-resizer-cfg-csi-mock-volumes-3809 Nov 13 05:52:23.771: INFO: creating *v1.RoleBinding: csi-mock-volumes-3809-6597/csi-resizer-role-cfg Nov 13 05:52:23.773: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-snapshotter Nov 13 05:52:23.776: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3809 Nov 13 05:52:23.776: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3809 Nov 13 05:52:23.778: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3809 Nov 13 05:52:23.780: INFO: creating *v1.Role: csi-mock-volumes-3809-6597/external-snapshotter-leaderelection-csi-mock-volumes-3809 Nov 13 05:52:23.782: INFO: creating *v1.RoleBinding: csi-mock-volumes-3809-6597/external-snapshotter-leaderelection Nov 13 05:52:23.785: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-mock Nov 13 05:52:23.787: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3809 Nov 13 05:52:23.789: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3809 Nov 13 05:52:23.792: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3809 Nov 13 05:52:23.795: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3809 Nov 13 05:52:23.797: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3809 Nov 13 05:52:23.800: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3809 Nov 13 05:52:23.802: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3809 Nov 13 05:52:23.805: INFO: creating *v1.StatefulSet: csi-mock-volumes-3809-6597/csi-mockplugin Nov 13 05:52:23.809: INFO: creating *v1.StatefulSet: csi-mock-volumes-3809-6597/csi-mockplugin-attacher Nov 13 05:52:23.815: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3809 to register on node node1 STEP: Creating pod Nov 13 05:52:33.333: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:52:33.337: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-nw66w] to have phase Bound Nov 13 05:52:33.339: INFO: PersistentVolumeClaim pvc-nw66w found but phase is Pending instead of Bound. 
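------------------------------
The spec running here checks that the kubelet does not inject pod information into CSI NodePublishVolume calls when the driver has no CSIDriver object: only a CSIDriver with podInfoOnMount set to true asks for csi.storage.k8s.io/pod.name, pod.namespace, pod.uid, and serviceAccount.name in the volume_context, and the recorded mock-driver calls a little further down show an empty VolumeContext, i.e. no pod details were passed. A minimal sketch of the opt-in object, with an illustrative driver name (not the mock driver deployed above):

package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	attachRequired := true
	podInfoOnMount := true

	// Only when PodInfoOnMount is true does the kubelet add the
	// csi.storage.k8s.io/pod.* keys to volume_context for this driver's
	// NodePublishVolume calls; without a CSIDriver object nothing is added.
	driver := storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-example"},
		Spec: storagev1.CSIDriverSpec{
			AttachRequired: &attachRequired,
			PodInfoOnMount: &podInfoOnMount,
		},
	}
	fmt.Printf("CSIDriver %q: podInfoOnMount=%v\n", driver.Name, *driver.Spec.PodInfoOnMount)
}
------------------------------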
Nov 13 05:52:35.342: INFO: PersistentVolumeClaim pvc-nw66w found and phase=Bound (2.005614945s) STEP: Deleting the previously created pod Nov 13 05:52:43.362: INFO: Deleting pod "pvc-volume-tester-s85n4" in namespace "csi-mock-volumes-3809" Nov 13 05:52:43.367: INFO: Wait up to 5m0s for pod "pvc-volume-tester-s85n4" to be fully deleted STEP: Checking CSI driver logs Nov 13 05:52:51.383: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/916efad3-96e3-433e-b1e3-5e019b324a2b/volumes/kubernetes.io~csi/pvc-94f038a9-49c0-4070-85df-b0e7e60db9f0/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-s85n4 Nov 13 05:52:51.383: INFO: Deleting pod "pvc-volume-tester-s85n4" in namespace "csi-mock-volumes-3809" STEP: Deleting claim pvc-nw66w Nov 13 05:52:51.391: INFO: Waiting up to 2m0s for PersistentVolume pvc-94f038a9-49c0-4070-85df-b0e7e60db9f0 to get deleted Nov 13 05:52:51.393: INFO: PersistentVolume pvc-94f038a9-49c0-4070-85df-b0e7e60db9f0 found and phase=Bound (1.981929ms) Nov 13 05:52:53.398: INFO: PersistentVolume pvc-94f038a9-49c0-4070-85df-b0e7e60db9f0 was removed STEP: Deleting storageclass csi-mock-volumes-3809-scchk49 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3809 STEP: Waiting for namespaces [csi-mock-volumes-3809] to vanish STEP: uninstalling csi mock driver Nov 13 05:52:59.414: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-attacher Nov 13 05:52:59.418: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3809 Nov 13 05:52:59.422: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3809 Nov 13 05:52:59.425: INFO: deleting *v1.Role: csi-mock-volumes-3809-6597/external-attacher-cfg-csi-mock-volumes-3809 Nov 13 05:52:59.429: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3809-6597/csi-attacher-role-cfg Nov 13 05:52:59.434: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-provisioner Nov 13 05:52:59.437: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3809 Nov 13 05:52:59.442: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3809 Nov 13 05:52:59.448: INFO: deleting *v1.Role: csi-mock-volumes-3809-6597/external-provisioner-cfg-csi-mock-volumes-3809 Nov 13 05:52:59.452: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3809-6597/csi-provisioner-role-cfg Nov 13 05:52:59.456: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-resizer Nov 13 05:52:59.460: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3809 Nov 13 05:52:59.464: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3809 Nov 13 05:52:59.467: INFO: deleting *v1.Role: csi-mock-volumes-3809-6597/external-resizer-cfg-csi-mock-volumes-3809 Nov 13 05:52:59.472: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3809-6597/csi-resizer-role-cfg Nov 13 05:52:59.476: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-snapshotter Nov 13 05:52:59.479: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3809 Nov 13 05:52:59.483: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3809 Nov 13 05:52:59.486: INFO: deleting *v1.Role: csi-mock-volumes-3809-6597/external-snapshotter-leaderelection-csi-mock-volumes-3809 Nov 13 
05:52:59.490: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3809-6597/external-snapshotter-leaderelection Nov 13 05:52:59.493: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3809-6597/csi-mock Nov 13 05:52:59.497: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3809 Nov 13 05:52:59.501: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3809 Nov 13 05:52:59.504: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3809 Nov 13 05:52:59.507: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3809 Nov 13 05:52:59.511: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3809 Nov 13 05:52:59.514: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3809 Nov 13 05:52:59.518: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3809 Nov 13 05:52:59.521: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3809-6597/csi-mockplugin Nov 13 05:52:59.524: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3809-6597/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3809-6597 STEP: Waiting for namespaces [csi-mock-volumes-3809-6597] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:27.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:63.892 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":13,"skipped":417,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:33.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:33.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7230" for this suite. 
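------------------------------
The slow ConfigMap spec above (roughly 300 seconds, see the summary below) creates a pod whose volume references a ConfigMap that was never created, with optional left false, so the kubelet keeps the pod from starting (FailedMount events, pod stuck in ContainerCreating) and the test passes by observing that it never runs. A minimal sketch of such a volume definition, with illustrative names rather than the suite's generated ones:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := false // the default; spelled out here because it is the point of the test

	vol := corev1.Volume{
		Name: "cfg",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				// References a ConfigMap that does not exist in the namespace.
				LocalObjectReference: corev1.LocalObjectReference{Name: "does-not-exist"},
				Optional:             &optional,
			},
		},
	}
	// With Optional=false the kubelet refuses to start containers mounting this volume
	// until the ConfigMap appears; with Optional=true it would mount an empty volume instead.
	fmt.Printf("volume %q optional=%v\n", vol.Name, *vol.VolumeSource.ConfigMap.Optional)
}
------------------------------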
• [SLOW TEST:300.056 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":11,"skipped":268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:33.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Nov 13 05:53:33.385: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:33.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-9325" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv4 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:78 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:22.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42 [It] should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 STEP: starting configmap-client STEP: Checking that text file contents are perfect. 
Nov 13 05:53:26.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-4250 exec configmap-client --namespace=volume-4250 -- cat /opt/0/firstfile' Nov 13 05:53:26.335: INFO: stderr: "" Nov 13 05:53:26.335: INFO: stdout: "this is the first file" Nov 13 05:53:26.335: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-4250 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:53:26.335: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:26.411: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-4250 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:53:26.411: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:26.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-4250 exec configmap-client --namespace=volume-4250 -- cat /opt/1/secondfile' Nov 13 05:53:26.718: INFO: stderr: "" Nov 13 05:53:26.719: INFO: stdout: "this is the second file" Nov 13 05:53:26.719: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/1] Namespace:volume-4250 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:53:26.719: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:26.793: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/1] Namespace:volume-4250 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:53:26.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod configmap-client in namespace volume-4250 Nov 13 05:53:26.877: INFO: Waiting for pod configmap-client to disappear Nov 13 05:53:26.880: INFO: Pod configmap-client still exists Nov 13 05:53:28.882: INFO: Waiting for pod configmap-client to disappear Nov 13 05:53:28.885: INFO: Pod configmap-client still exists Nov 13 05:53:30.880: INFO: Waiting for pod configmap-client to disappear Nov 13 05:53:30.883: INFO: Pod configmap-client still exists Nov 13 05:53:32.880: INFO: Waiting for pod configmap-client to disappear Nov 13 05:53:32.884: INFO: Pod configmap-client still exists Nov 13 05:53:34.883: INFO: Waiting for pod configmap-client to disappear Nov 13 05:53:34.885: INFO: Pod configmap-client still exists Nov 13 05:53:36.881: INFO: Waiting for pod configmap-client to disappear Nov 13 05:53:36.884: INFO: Pod configmap-client still exists Nov 13 05:53:38.884: INFO: Waiting for pod configmap-client to disappear Nov 13 05:53:38.887: INFO: Pod configmap-client still exists Nov 13 05:53:40.880: INFO: Waiting for pod configmap-client to disappear Nov 13 05:53:40.885: INFO: Pod configmap-client still exists Nov 13 05:53:42.881: INFO: Waiting for pod configmap-client to disappear Nov 13 05:53:42.885: INFO: Pod configmap-client no longer exists [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:42.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-4250" for this suite. 
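------------------------------
The "Waiting for pod configmap-client to disappear" loop above is a plain poll against the API server until Get returns NotFound. With client-go the same wait looks roughly like the sketch below; the kubeconfig path is the one logged by the suite, the namespace and pod name are illustrative, and wait.PollImmediate comes from k8s.io/apimachinery (still available, though deprecated in newer releases).

package main

import (
	"context"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path taken from the log
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ns, name := "volume-4250", "configmap-client" // illustrative; any pod works
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, getErr := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(getErr) {
			return true, nil // pod is gone
		}
		if getErr != nil {
			return false, getErr // unexpected API error: stop waiting
		}
		log.Printf("Pod %s still exists", name)
		return false, nil // keep polling
	})
	if err != nil {
		log.Fatalf("pod %s did not disappear: %v", name, err)
	}
}
------------------------------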
• [SLOW TEST:20.839 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 ------------------------------ {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":16,"skipped":606,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:33.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:53:37.543: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-107ef6f5-a554-4849-b3b4-f6576ee83de6-backend && ln -s /tmp/local-volume-test-107ef6f5-a554-4849-b3b4-f6576ee83de6-backend /tmp/local-volume-test-107ef6f5-a554-4849-b3b4-f6576ee83de6] Namespace:persistent-local-volumes-test-4285 PodName:hostexec-node2-8gx98 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:37.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:53:37.647: INFO: Creating a PV followed by a PVC Nov 13 05:53:37.655: INFO: Waiting for PV local-pvwvmsp to bind to PVC pvc-vq976 Nov 13 05:53:37.655: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vq976] to have phase Bound Nov 13 05:53:37.656: INFO: PersistentVolumeClaim pvc-vq976 found but phase is Pending instead of Bound. 
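------------------------------
The dir-link spec starting here publishes a node-local path (a symlink to a backing directory) as a PersistentVolume and pairs it with a claim, which is what the bind wait around this point is about. Local PVs must carry required node affinity so consuming pods are scheduled onto the node that actually hosts the path. A sketch of such a PV/PVC pair, assuming a "local-storage" StorageClass, illustrative names and capacity, and the field names of the k8s.io/api release contemporary with this run (v1.21):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := "local-storage" // illustrative StorageClass name

	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-demo"},
		Spec: corev1.PersistentVolumeSpec{
			StorageClassName: sc,
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// Node-local path published by this PV; for the dir-link variant the test
				// creates it beforehand as a symlink to a backing directory on the node.
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-demo"},
			},
			// Required node affinity pins consumers to the node that hosts the path.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node2"},
						}},
					}},
				},
			},
		},
	}

	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-demo"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeName:       pv.Name, // optional pre-binding for a deterministic pairing
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}
	fmt.Printf("PV %s paired with PVC %s on node2\n", pv.Name, pvc.Name)
}
------------------------------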
Nov 13 05:53:39.661: INFO: PersistentVolumeClaim pvc-vq976 found and phase=Bound (2.006689554s) Nov 13 05:53:39.661: INFO: Waiting up to 3m0s for PersistentVolume local-pvwvmsp to have phase Bound Nov 13 05:53:39.663: INFO: PersistentVolume local-pvwvmsp found and phase=Bound (2.127732ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:53:43.691: INFO: pod "pod-438bf8aa-f6ea-4eb0-a7c2-c3635e6b031b" created on Node "node2" STEP: Writing in pod1 Nov 13 05:53:43.691: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4285 PodName:pod-438bf8aa-f6ea-4eb0-a7c2-c3635e6b031b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:53:43.691: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:43.947: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 13 05:53:43.947: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4285 PodName:pod-438bf8aa-f6ea-4eb0-a7c2-c3635e6b031b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:53:43.947: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:44.203: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-438bf8aa-f6ea-4eb0-a7c2-c3635e6b031b in namespace persistent-local-volumes-test-4285 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:53:44.208: INFO: Deleting PersistentVolumeClaim "pvc-vq976" Nov 13 05:53:44.211: INFO: Deleting PersistentVolume "local-pvwvmsp" STEP: Removing the test directory Nov 13 05:53:44.216: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-107ef6f5-a554-4849-b3b4-f6576ee83de6 && rm -r /tmp/local-volume-test-107ef6f5-a554-4849-b3b4-f6576ee83de6-backend] Namespace:persistent-local-volumes-test-4285 PodName:hostexec-node2-8gx98 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:44.216: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:44.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4285" for this suite. 
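------------------------------
The write/read verification in the spec that just finished is done by exec-ing a small shell pipeline inside the test pod. Outside the framework the same round trip can be driven with kubectl exec; the sketch below assumes kubectl on PATH, the kubeconfig path logged by the suite, and illustrative namespace and pod names.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kubectlExec runs a shell command inside a pod by shelling out to kubectl,
// roughly mirroring the exec calls recorded in the log above.
func kubectlExec(ns, pod, cmd string) (string, error) {
	out, err := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config", // path as logged by the suite
		"--namespace="+ns, "exec", pod, "--",
		"sh", "-c", cmd,
	).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ns, pod := "persistent-local-volumes-test-demo", "write-pod-demo" // illustrative

	// Write a file into the mounted local volume, then read it back.
	if _, err := kubectlExec(ns, pod, "mkdir -p /mnt/volume1 && echo test-file-content > /mnt/volume1/test-file"); err != nil {
		log.Fatal(err)
	}
	got, err := kubectlExec(ns, pod, "cat /mnt/volume1/test-file")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("read back:", got) // expect "test-file-content"
}
------------------------------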
• [SLOW TEST:10.877 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":393,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:25.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 13 05:53:29.623: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-2fd37095-2dc7-4116-bf09-38d22de042a8-backend && mount --bind /tmp/local-volume-test-2fd37095-2dc7-4116-bf09-38d22de042a8-backend /tmp/local-volume-test-2fd37095-2dc7-4116-bf09-38d22de042a8-backend && ln -s /tmp/local-volume-test-2fd37095-2dc7-4116-bf09-38d22de042a8-backend /tmp/local-volume-test-2fd37095-2dc7-4116-bf09-38d22de042a8] Namespace:persistent-local-volumes-test-6334 PodName:hostexec-node1-nl76s ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:29.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:53:29.734: INFO: Creating a PV followed by a PVC Nov 13 05:53:29.742: INFO: Waiting for PV local-pvd746p to bind to PVC pvc-m2wlc Nov 13 05:53:29.742: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-m2wlc] to have phase Bound Nov 13 05:53:29.744: INFO: PersistentVolumeClaim pvc-m2wlc found but phase is Pending instead of Bound. Nov 13 05:53:31.747: INFO: PersistentVolumeClaim pvc-m2wlc found but phase is Pending instead of Bound. Nov 13 05:53:33.750: INFO: PersistentVolumeClaim pvc-m2wlc found but phase is Pending instead of Bound. Nov 13 05:53:35.753: INFO: PersistentVolumeClaim pvc-m2wlc found but phase is Pending instead of Bound. Nov 13 05:53:37.756: INFO: PersistentVolumeClaim pvc-m2wlc found but phase is Pending instead of Bound. Nov 13 05:53:39.760: INFO: PersistentVolumeClaim pvc-m2wlc found but phase is Pending instead of Bound. 
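------------------------------
Once the claim above finally binds, this spec creates ordinary "write" pods that mount it at /mnt/volume1 (pod1 writes a file, pod2 reads it back, as the following lines show). A minimal sketch of such a consumer pod referencing a bound claim; the image and names are illustrative, and only the claim reference and mount path mirror the test.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "write-pod-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "write-pod",
				Image:   "busybox", // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "volume1",
					MountPath: "/mnt/volume1", // the path exercised by the write/read steps
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "pvc-m2wlc", // the claim bound just above; any bound PVC works
					},
				},
			}},
		},
	}
	fmt.Printf("pod %s mounts PVC %s at /mnt/volume1\n",
		pod.Name, pod.Spec.Volumes[0].PersistentVolumeClaim.ClaimName)
}
------------------------------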
Nov 13 05:53:41.763: INFO: PersistentVolumeClaim pvc-m2wlc found and phase=Bound (12.021407211s) Nov 13 05:53:41.763: INFO: Waiting up to 3m0s for PersistentVolume local-pvd746p to have phase Bound Nov 13 05:53:41.766: INFO: PersistentVolume local-pvd746p found and phase=Bound (2.505759ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 13 05:53:45.797: INFO: pod "pod-0d25b1c4-ab31-4b9f-82c5-11f89da69db9" created on Node "node1" STEP: Writing in pod1 Nov 13 05:53:45.797: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6334 PodName:pod-0d25b1c4-ab31-4b9f-82c5-11f89da69db9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:53:45.798: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:45.878: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 13 05:53:45.878: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6334 PodName:pod-0d25b1c4-ab31-4b9f-82c5-11f89da69db9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:53:45.878: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:45.965: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-0d25b1c4-ab31-4b9f-82c5-11f89da69db9 in namespace persistent-local-volumes-test-6334 STEP: Creating pod2 STEP: Creating a pod Nov 13 05:53:49.994: INFO: pod "pod-250b4b71-420b-4911-a8e4-7887c38c592c" created on Node "node1" STEP: Reading in pod2 Nov 13 05:53:49.994: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6334 PodName:pod-250b4b71-420b-4911-a8e4-7887c38c592c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 05:53:49.994: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:50.073: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-250b4b71-420b-4911-a8e4-7887c38c592c in namespace persistent-local-volumes-test-6334 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:53:50.078: INFO: Deleting PersistentVolumeClaim "pvc-m2wlc" Nov 13 05:53:50.082: INFO: Deleting PersistentVolume "local-pvd746p" STEP: Removing the test directory Nov 13 05:53:50.086: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-2fd37095-2dc7-4116-bf09-38d22de042a8 && umount /tmp/local-volume-test-2fd37095-2dc7-4116-bf09-38d22de042a8-backend && rm -r /tmp/local-volume-test-2fd37095-2dc7-4116-bf09-38d22de042a8-backend] Namespace:persistent-local-volumes-test-6334 PodName:hostexec-node1-nl76s ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:50.086: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:50.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6334" for this suite. • [SLOW TEST:24.654 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":20,"skipped":575,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:52:44.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-8882 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 13 05:52:44.864: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-attacher Nov 13 05:52:44.867: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8882 Nov 13 05:52:44.867: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8882 Nov 13 05:52:44.869: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8882 Nov 13 05:52:44.872: INFO: creating *v1.Role: csi-mock-volumes-8882-4813/external-attacher-cfg-csi-mock-volumes-8882 Nov 13 05:52:44.874: INFO: creating *v1.RoleBinding: csi-mock-volumes-8882-4813/csi-attacher-role-cfg Nov 13 05:52:44.877: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-provisioner Nov 13 05:52:44.879: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8882 Nov 13 05:52:44.879: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8882 Nov 13 05:52:44.882: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8882 Nov 13 05:52:44.885: INFO: creating *v1.Role: csi-mock-volumes-8882-4813/external-provisioner-cfg-csi-mock-volumes-8882 Nov 13 05:52:44.888: INFO: creating *v1.RoleBinding: csi-mock-volumes-8882-4813/csi-provisioner-role-cfg Nov 13 05:52:44.891: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-resizer Nov 13 05:52:44.893: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8882 Nov 13 05:52:44.893: INFO: Define cluster role 
external-resizer-runner-csi-mock-volumes-8882 Nov 13 05:52:44.896: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8882 Nov 13 05:52:44.898: INFO: creating *v1.Role: csi-mock-volumes-8882-4813/external-resizer-cfg-csi-mock-volumes-8882 Nov 13 05:52:44.901: INFO: creating *v1.RoleBinding: csi-mock-volumes-8882-4813/csi-resizer-role-cfg Nov 13 05:52:44.905: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-snapshotter Nov 13 05:52:44.908: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8882 Nov 13 05:52:44.908: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8882 Nov 13 05:52:44.913: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8882 Nov 13 05:52:44.918: INFO: creating *v1.Role: csi-mock-volumes-8882-4813/external-snapshotter-leaderelection-csi-mock-volumes-8882 Nov 13 05:52:44.923: INFO: creating *v1.RoleBinding: csi-mock-volumes-8882-4813/external-snapshotter-leaderelection Nov 13 05:52:44.928: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-mock Nov 13 05:52:44.932: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8882 Nov 13 05:52:44.937: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8882 Nov 13 05:52:44.939: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8882 Nov 13 05:52:44.942: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8882 Nov 13 05:52:44.945: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8882 Nov 13 05:52:44.948: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8882 Nov 13 05:52:44.950: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8882 Nov 13 05:52:44.953: INFO: creating *v1.StatefulSet: csi-mock-volumes-8882-4813/csi-mockplugin Nov 13 05:52:44.957: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8882 Nov 13 05:52:44.960: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8882" Nov 13 05:52:44.962: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8882 to register on node node1 I1113 05:52:54.026392 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1113 05:52:54.028127 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8882","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:52:54.029334 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1113 05:52:54.030664 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1113 05:52:54.172055 26 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8882","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1113 05:52:54.329896 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8882"},"Error":"","FullError":null} STEP: Creating pod Nov 13 05:52:54.476: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:52:54.480: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-l85fj] to have phase Bound Nov 13 05:52:54.482: INFO: PersistentVolumeClaim pvc-l85fj found but phase is Pending instead of Bound. I1113 05:52:54.488627 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4"}}},"Error":"","FullError":null} Nov 13 05:52:56.486: INFO: PersistentVolumeClaim pvc-l85fj found and phase=Bound (2.005280371s) Nov 13 05:52:56.501: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-l85fj] to have phase Bound Nov 13 05:52:56.503: INFO: PersistentVolumeClaim pvc-l85fj found and phase=Bound (1.825309ms) STEP: Waiting for expected CSI calls I1113 05:52:56.753196 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:52:56.755758 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4","storage.kubernetes.io/csiProvisionerIdentity":"1636782774029-8081-csi-mock-csi-mock-volumes-8882"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1113 05:52:57.268490 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:52:57.295415 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4","storage.kubernetes.io/csiProvisionerIdentity":"1636782774029-8081-csi-mock-csi-mock-volumes-8882"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Deleting the previously created pod Nov 13 05:52:57.504: INFO: Deleting pod "pvc-volume-tester-6l4qn" in namespace "csi-mock-volumes-8882" Nov 13 05:52:57.508: INFO: Wait up to 5m0s for pod 
"pvc-volume-tester-6l4qn" to be fully deleted I1113 05:52:58.398915 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:52:58.400955 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4","storage.kubernetes.io/csiProvisionerIdentity":"1636782774029-8081-csi-mock-csi-mock-volumes-8882"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1113 05:53:00.438840 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1113 05:53:00.441866 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4","storage.kubernetes.io/csiProvisionerIdentity":"1636782774029-8081-csi-mock-csi-mock-volumes-8882"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-6l4qn Nov 13 05:53:02.513: INFO: Deleting pod "pvc-volume-tester-6l4qn" in namespace "csi-mock-volumes-8882" STEP: Deleting claim pvc-l85fj Nov 13 05:53:02.523: INFO: Waiting up to 2m0s for PersistentVolume pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4 to get deleted Nov 13 05:53:02.526: INFO: PersistentVolume pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4 found and phase=Bound (2.579729ms) I1113 05:53:02.572336 26 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 13 05:53:04.529: INFO: PersistentVolume pvc-e1fbb474-ba74-4239-b2bd-a5f14efe68e4 was removed STEP: Deleting storageclass csi-mock-volumes-8882-scrzkw6 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8882 STEP: Waiting for namespaces [csi-mock-volumes-8882] to vanish STEP: uninstalling csi mock driver Nov 13 05:53:10.556: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-attacher Nov 13 05:53:10.561: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8882 Nov 13 05:53:10.565: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8882 Nov 13 05:53:10.568: INFO: deleting *v1.Role: csi-mock-volumes-8882-4813/external-attacher-cfg-csi-mock-volumes-8882 Nov 13 05:53:10.574: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8882-4813/csi-attacher-role-cfg Nov 13 05:53:10.579: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-provisioner Nov 13 05:53:10.583: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8882 Nov 13 05:53:10.586: INFO: deleting 
*v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8882 Nov 13 05:53:10.590: INFO: deleting *v1.Role: csi-mock-volumes-8882-4813/external-provisioner-cfg-csi-mock-volumes-8882 Nov 13 05:53:10.594: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8882-4813/csi-provisioner-role-cfg Nov 13 05:53:10.598: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-resizer Nov 13 05:53:10.600: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8882 Nov 13 05:53:10.603: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8882 Nov 13 05:53:10.606: INFO: deleting *v1.Role: csi-mock-volumes-8882-4813/external-resizer-cfg-csi-mock-volumes-8882 Nov 13 05:53:10.609: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8882-4813/csi-resizer-role-cfg Nov 13 05:53:10.612: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-snapshotter Nov 13 05:53:10.616: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8882 Nov 13 05:53:10.619: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8882 Nov 13 05:53:10.623: INFO: deleting *v1.Role: csi-mock-volumes-8882-4813/external-snapshotter-leaderelection-csi-mock-volumes-8882 Nov 13 05:53:10.626: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8882-4813/external-snapshotter-leaderelection Nov 13 05:53:10.630: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8882-4813/csi-mock Nov 13 05:53:10.635: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8882 Nov 13 05:53:10.638: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8882 Nov 13 05:53:10.642: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8882 Nov 13 05:53:10.645: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8882 Nov 13 05:53:10.649: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8882 Nov 13 05:53:10.652: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8882 Nov 13 05:53:10.655: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8882 Nov 13 05:53:10.658: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8882-4813/csi-mockplugin Nov 13 05:53:10.662: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8882 STEP: deleting the driver namespace: csi-mock-volumes-8882-4813 STEP: Waiting for namespaces [csi-mock-volumes-8882-4813] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:54.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:69.880 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error","total":-1,"completed":15,"skipped":476,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] 
[sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:53:50.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-block-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325
STEP: Create a pod for further testing
Nov 13 05:53:50.284: INFO: The status of Pod test-hostpath-type-bjwll is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:53:52.288: INFO: The status of Pod test-hostpath-type-bjwll is Pending, waiting for it to be Running (with Ready = true)
Nov 13 05:53:54.287: INFO: The status of Pod test-hostpath-type-bjwll is Running (Ready = true)
STEP: running on node node1
STEP: Create a block device for further testing
Nov 13 05:53:54.289: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-433 PodName:test-hostpath-type-bjwll ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 05:53:54.289: INFO: >>> kubeConfig: /root/.kube/config
[It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:53:56.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-block-dev-433" for this suite.
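The HostPathType check exercised above is driven by the `type` field on the hostPath volume source: kubelet verifies that whatever sits at the host path matches the declared type before mounting, so declaring `Socket` for the block device created with `mknod` must fail and surface an error event. A minimal sketch of how such a volume could be declared with the Kubernetes API types, assuming `k8s.io/api/core/v1` is on the module path; `hostPathVolume` is an illustrative helper, not a function from the e2e framework:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// hostPathVolume declares a hostPath volume with an explicit type check.
// Kubelet compares the declared type against what actually exists at the
// path before mounting: /mnt/test/ablkdev is a block device, so declaring
// HostPathSocket is expected to fail, which is what the test asserts.
func hostPathVolume(name, path string, t v1.HostPathType) v1.Volume {
	return v1.Volume{
		Name: name,
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{
				Path: path,
				Type: &t,
			},
		},
	}
}

func main() {
	vol := hostPathVolume("ablkdev", "/mnt/test/ablkdev", v1.HostPathSocket)
	fmt.Printf("%+v\n", vol)
}
```

Swapping `v1.HostPathSocket` for `v1.HostPathBlockDev` would describe the device correctly, and the mount would then be expected to succeed.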
• [SLOW TEST:6.162 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket","total":-1,"completed":21,"skipped":584,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:48:58.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469 STEP: Creating configMap with name cm-test-opt-create-6a1d805e-8dd5-4561-b2d8-b6f97aa64f7a STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:53:58.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3444" for this suite. • [SLOW TEST:300.058 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":14,"skipped":442,"failed":0} S ------------------------------ Nov 13 05:53:58.083: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:44.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-7628fb6f-fa85-492d-96f2-99c7bc16da42" Nov 13 05:53:48.466: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7628fb6f-fa85-492d-96f2-99c7bc16da42 && dd if=/dev/zero 
of=/tmp/local-volume-test-7628fb6f-fa85-492d-96f2-99c7bc16da42/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-7628fb6f-fa85-492d-96f2-99c7bc16da42/file] Namespace:persistent-local-volumes-test-1253 PodName:hostexec-node2-zlzsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:48.466: INFO: >>> kubeConfig: /root/.kube/config Nov 13 05:53:48.578: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7628fb6f-fa85-492d-96f2-99c7bc16da42/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1253 PodName:hostexec-node2-zlzsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:53:48.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 13 05:53:48.669: INFO: Creating a PV followed by a PVC Nov 13 05:53:48.675: INFO: Waiting for PV local-pvq2d6c to bind to PVC pvc-47sl9 Nov 13 05:53:48.675: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-47sl9] to have phase Bound Nov 13 05:53:48.678: INFO: PersistentVolumeClaim pvc-47sl9 found but phase is Pending instead of Bound. Nov 13 05:53:50.682: INFO: PersistentVolumeClaim pvc-47sl9 found but phase is Pending instead of Bound. Nov 13 05:53:52.685: INFO: PersistentVolumeClaim pvc-47sl9 found but phase is Pending instead of Bound. Nov 13 05:53:54.689: INFO: PersistentVolumeClaim pvc-47sl9 found but phase is Pending instead of Bound. Nov 13 05:53:56.693: INFO: PersistentVolumeClaim pvc-47sl9 found and phase=Bound (8.017741546s) Nov 13 05:53:56.693: INFO: Waiting up to 3m0s for PersistentVolume local-pvq2d6c to have phase Bound Nov 13 05:53:56.695: INFO: PersistentVolume local-pvq2d6c found and phase=Bound (2.113312ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 13 05:54:02.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1253 exec pod-ae6d1ff7-b6d2-42dd-89df-1152c45c177e --namespace=persistent-local-volumes-test-1253 -- stat -c %g /mnt/volume1' Nov 13 05:54:02.968: INFO: stderr: "" Nov 13 05:54:02.968: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 13 05:54:06.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1253 exec pod-9a86f7c9-6b25-4ada-8cb2-cea575d71999 --namespace=persistent-local-volumes-test-1253 -- stat -c %g /mnt/volume1' Nov 13 05:54:07.238: INFO: stderr: "" Nov 13 05:54:07.238: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-ae6d1ff7-b6d2-42dd-89df-1152c45c177e in namespace persistent-local-volumes-test-1253 STEP: Deleting second pod STEP: Deleting pod pod-9a86f7c9-6b25-4ada-8cb2-cea575d71999 in namespace persistent-local-volumes-test-1253 [AfterEach] [Volume type: blockfswithoutformat] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 13 05:54:07.247: INFO: Deleting PersistentVolumeClaim "pvc-47sl9" Nov 13 05:54:07.251: INFO: Deleting PersistentVolume "local-pvq2d6c" Nov 13 05:54:07.255: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7628fb6f-fa85-492d-96f2-99c7bc16da42/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1253 PodName:hostexec-node2-zlzsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:54:07.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-7628fb6f-fa85-492d-96f2-99c7bc16da42/file Nov 13 05:54:07.345: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1253 PodName:hostexec-node2-zlzsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:54:07.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-7628fb6f-fa85-492d-96f2-99c7bc16da42 Nov 13 05:54:07.504: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7628fb6f-fa85-492d-96f2-99c7bc16da42] Namespace:persistent-local-volumes-test-1253 PodName:hostexec-node2-zlzsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 05:54:07.504: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:54:07.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1253" for this suite. 
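The `stat -c %g /mnt/volume1` output of `1234` above comes from the pod-level fsGroup: both pods mount the same local PV with the same fsGroup, so both observe the volume's group ownership set to that GID. A rough sketch of the relevant securityContext using the Kubernetes API types; the pod and claim names are placeholders rather than the generated names from this run:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithFSGroup builds a pod that mounts an existing PVC and asks kubelet
// to apply the given fsGroup to the volume before the container starts.
func podWithFSGroup(name, pvcName string, fsGroup int64) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []v1.Container{{
				Name:         "checker",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"},
				VolumeMounts: []v1.VolumeMount{{Name: "vol", MountPath: "/mnt/volume1"}},
			}},
			Volumes: []v1.Volume{{
				Name: "vol",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: pvcName},
				},
			}},
		},
	}
}

func main() {
	pod := podWithFSGroup("fsgroup-demo", "my-claim", 1234)
	fmt.Println(*pod.Spec.SecurityContext.FSGroup)
}
```

Creating two such pods against the same claim with the same fsGroup is the situation the test checks: the group ID reported inside both pods should match.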
• [SLOW TEST:23.296 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":13,"skipped":415,"failed":0} Nov 13 05:54:07.717: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:01.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-974 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:53:01.447: INFO: creating *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-attacher Nov 13 05:53:01.450: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-974 Nov 13 05:53:01.450: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-974 Nov 13 05:53:01.452: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-974 Nov 13 05:53:01.456: INFO: creating *v1.Role: csi-mock-volumes-974-2283/external-attacher-cfg-csi-mock-volumes-974 Nov 13 05:53:01.458: INFO: creating *v1.RoleBinding: csi-mock-volumes-974-2283/csi-attacher-role-cfg Nov 13 05:53:01.462: INFO: creating *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-provisioner Nov 13 05:53:01.464: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-974 Nov 13 05:53:01.464: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-974 Nov 13 05:53:01.468: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-974 Nov 13 05:53:01.471: INFO: creating *v1.Role: csi-mock-volumes-974-2283/external-provisioner-cfg-csi-mock-volumes-974 Nov 13 05:53:01.473: INFO: creating *v1.RoleBinding: csi-mock-volumes-974-2283/csi-provisioner-role-cfg Nov 13 05:53:01.476: INFO: creating *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-resizer Nov 13 05:53:01.478: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-974 Nov 13 05:53:01.478: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-974 Nov 13 05:53:01.480: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-974 Nov 13 05:53:01.483: INFO: creating *v1.Role: csi-mock-volumes-974-2283/external-resizer-cfg-csi-mock-volumes-974 Nov 13 05:53:01.485: INFO: creating *v1.RoleBinding: csi-mock-volumes-974-2283/csi-resizer-role-cfg 
Nov 13 05:53:01.490: INFO: creating *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-snapshotter Nov 13 05:53:01.493: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-974 Nov 13 05:53:01.493: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-974 Nov 13 05:53:01.496: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-974 Nov 13 05:53:01.498: INFO: creating *v1.Role: csi-mock-volumes-974-2283/external-snapshotter-leaderelection-csi-mock-volumes-974 Nov 13 05:53:01.500: INFO: creating *v1.RoleBinding: csi-mock-volumes-974-2283/external-snapshotter-leaderelection Nov 13 05:53:01.504: INFO: creating *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-mock Nov 13 05:53:01.506: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-974 Nov 13 05:53:01.509: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-974 Nov 13 05:53:01.511: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-974 Nov 13 05:53:01.514: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-974 Nov 13 05:53:01.516: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-974 Nov 13 05:53:01.519: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-974 Nov 13 05:53:01.521: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-974 Nov 13 05:53:01.523: INFO: creating *v1.StatefulSet: csi-mock-volumes-974-2283/csi-mockplugin Nov 13 05:53:01.528: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-974 Nov 13 05:53:01.530: INFO: creating *v1.StatefulSet: csi-mock-volumes-974-2283/csi-mockplugin-attacher Nov 13 05:53:01.534: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-974" Nov 13 05:53:01.536: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-974 to register on node node1 STEP: Creating pod Nov 13 05:53:16.055: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 13 05:53:22.078: INFO: Deleting pod "pvc-volume-tester-vp6gn" in namespace "csi-mock-volumes-974" Nov 13 05:53:22.082: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vp6gn" to be fully deleted STEP: Deleting pod pvc-volume-tester-vp6gn Nov 13 05:53:32.088: INFO: Deleting pod "pvc-volume-tester-vp6gn" in namespace "csi-mock-volumes-974" STEP: Deleting claim pvc-d5dnr Nov 13 05:53:32.098: INFO: Waiting up to 2m0s for PersistentVolume pvc-f972afe8-7723-43ab-bae5-bfb7a02a0f35 to get deleted Nov 13 05:53:32.100: INFO: PersistentVolume pvc-f972afe8-7723-43ab-bae5-bfb7a02a0f35 found and phase=Bound (2.389114ms) Nov 13 05:53:34.103: INFO: PersistentVolume pvc-f972afe8-7723-43ab-bae5-bfb7a02a0f35 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-974 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-974 STEP: Waiting for namespaces [csi-mock-volumes-974] to vanish STEP: uninstalling csi mock driver Nov 13 05:53:40.117: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-attacher Nov 13 05:53:40.121: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-974 Nov 13 05:53:40.125: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-974 Nov 13 05:53:40.129: INFO: deleting *v1.Role: csi-mock-volumes-974-2283/external-attacher-cfg-csi-mock-volumes-974 Nov 13 
05:53:40.134: INFO: deleting *v1.RoleBinding: csi-mock-volumes-974-2283/csi-attacher-role-cfg Nov 13 05:53:40.138: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-provisioner Nov 13 05:53:40.142: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-974 Nov 13 05:53:40.146: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-974 Nov 13 05:53:40.152: INFO: deleting *v1.Role: csi-mock-volumes-974-2283/external-provisioner-cfg-csi-mock-volumes-974 Nov 13 05:53:40.155: INFO: deleting *v1.RoleBinding: csi-mock-volumes-974-2283/csi-provisioner-role-cfg Nov 13 05:53:40.162: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-resizer Nov 13 05:53:40.165: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-974 Nov 13 05:53:40.172: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-974 Nov 13 05:53:40.175: INFO: deleting *v1.Role: csi-mock-volumes-974-2283/external-resizer-cfg-csi-mock-volumes-974 Nov 13 05:53:40.178: INFO: deleting *v1.RoleBinding: csi-mock-volumes-974-2283/csi-resizer-role-cfg Nov 13 05:53:40.182: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-snapshotter Nov 13 05:53:40.185: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-974 Nov 13 05:53:40.189: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-974 Nov 13 05:53:40.192: INFO: deleting *v1.Role: csi-mock-volumes-974-2283/external-snapshotter-leaderelection-csi-mock-volumes-974 Nov 13 05:53:40.195: INFO: deleting *v1.RoleBinding: csi-mock-volumes-974-2283/external-snapshotter-leaderelection Nov 13 05:53:40.201: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-974-2283/csi-mock Nov 13 05:53:40.205: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-974 Nov 13 05:53:40.209: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-974 Nov 13 05:53:40.213: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-974 Nov 13 05:53:40.216: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-974 Nov 13 05:53:40.220: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-974 Nov 13 05:53:40.223: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-974 Nov 13 05:53:40.226: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-974 Nov 13 05:53:40.230: INFO: deleting *v1.StatefulSet: csi-mock-volumes-974-2283/csi-mockplugin Nov 13 05:53:40.234: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-974 Nov 13 05:53:40.238: INFO: deleting *v1.StatefulSet: csi-mock-volumes-974-2283/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-974-2283 STEP: Waiting for namespaces [csi-mock-volumes-974-2283] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:54:08.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:66.877 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity disabled 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":6,"skipped":332,"failed":0} Nov 13 05:54:08.261: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:56.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627 Nov 13 05:53:56.443: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: creating an external dynamic provisioner pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass Nov 13 05:54:00.581: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: creating a claim with a external provisioning annotation STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-2444 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1572864000 0} {} 1500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-2444-external8q4b7,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Nov 13 05:54:00.587: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-8c752] to have phase Bound Nov 13 05:54:00.588: INFO: PersistentVolumeClaim pvc-8c752 found but phase is Pending instead of Bound. Nov 13 05:54:02.592: INFO: PersistentVolumeClaim pvc-8c752 found but phase is Pending instead of Bound. Nov 13 05:54:04.601: INFO: PersistentVolumeClaim pvc-8c752 found but phase is Pending instead of Bound. 
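The external-provisioner flow above hinges on the claim's storageClassName pointing at a class whose provisioner string is served by the out-of-tree provisioner pod; the claim stays Pending until that provisioner creates and binds a PV, which is the transition the wait loop is polling for. A rough sketch of the object pair, assuming `k8s.io/api` and `k8s.io/apimachinery` are available; the class and provisioner names are placeholders, not the generated values in this run:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// externalClassAndClaim returns a StorageClass handled by an out-of-tree
// provisioner plus a PVC that selects it. The claim remains Pending until
// the external provisioner creates and binds a matching PV.
func externalClassAndClaim() (*storagev1.StorageClass, *v1.PersistentVolumeClaim) {
	className := "external-example"
	sc := &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: className},
		Provisioner: "example.com/external", // must match what the provisioner pod watches for
	}
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &className,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{
					v1.ResourceStorage: resource.MustParse("1500Mi"),
				},
			},
		},
	}
	return sc, pvc
}

func main() {
	sc, pvc := externalClassAndClaim()
	q := pvc.Spec.Resources.Requests[v1.ResourceStorage]
	fmt.Println(sc.Provisioner, q.String())
}
```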
Nov 13 05:54:06.606: INFO: PersistentVolumeClaim pvc-8c752 found and phase=Bound (6.019495338s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-2444"/"pvc-8c752" STEP: deleting the claim's PV "pvc-3bfc194c-4418-4b6e-9454-1883598c4488" Nov 13 05:54:06.616: INFO: Waiting up to 20m0s for PersistentVolume pvc-3bfc194c-4418-4b6e-9454-1883598c4488 to get deleted Nov 13 05:54:06.618: INFO: PersistentVolume pvc-3bfc194c-4418-4b6e-9454-1883598c4488 found and phase=Bound (2.221517ms) Nov 13 05:54:11.621: INFO: PersistentVolume pvc-3bfc194c-4418-4b6e-9454-1883598c4488 was removed Nov 13 05:54:11.621: INFO: deleting claim "volume-provisioning-2444"/"pvc-8c752" Nov 13 05:54:11.624: INFO: deleting storage class volume-provisioning-2444-external8q4b7 STEP: Deleting pod external-provisioner-t8dvn in namespace volume-provisioning-2444 [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:54:11.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-2444" for this suite. • [SLOW TEST:15.227 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner External /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:626 should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]","total":-1,"completed":22,"skipped":585,"failed":0} Nov 13 05:54:11.643: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:27.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-7820 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:53:27.615: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-attacher Nov 13 05:53:27.619: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7820 Nov 13 05:53:27.619: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7820 Nov 13 05:53:27.621: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7820 Nov 13 05:53:27.625: INFO: creating *v1.Role: csi-mock-volumes-7820-5267/external-attacher-cfg-csi-mock-volumes-7820 Nov 13 05:53:27.627: INFO: creating *v1.RoleBinding: csi-mock-volumes-7820-5267/csi-attacher-role-cfg Nov 13 05:53:27.630: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-provisioner Nov 13 05:53:27.633: INFO: creating *v1.ClusterRole: 
external-provisioner-runner-csi-mock-volumes-7820 Nov 13 05:53:27.633: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7820 Nov 13 05:53:27.636: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7820 Nov 13 05:53:27.640: INFO: creating *v1.Role: csi-mock-volumes-7820-5267/external-provisioner-cfg-csi-mock-volumes-7820 Nov 13 05:53:27.642: INFO: creating *v1.RoleBinding: csi-mock-volumes-7820-5267/csi-provisioner-role-cfg Nov 13 05:53:27.645: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-resizer Nov 13 05:53:27.648: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7820 Nov 13 05:53:27.648: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7820 Nov 13 05:53:27.651: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7820 Nov 13 05:53:27.653: INFO: creating *v1.Role: csi-mock-volumes-7820-5267/external-resizer-cfg-csi-mock-volumes-7820 Nov 13 05:53:27.656: INFO: creating *v1.RoleBinding: csi-mock-volumes-7820-5267/csi-resizer-role-cfg Nov 13 05:53:27.659: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-snapshotter Nov 13 05:53:27.661: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7820 Nov 13 05:53:27.661: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7820 Nov 13 05:53:27.663: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7820 Nov 13 05:53:27.666: INFO: creating *v1.Role: csi-mock-volumes-7820-5267/external-snapshotter-leaderelection-csi-mock-volumes-7820 Nov 13 05:53:27.668: INFO: creating *v1.RoleBinding: csi-mock-volumes-7820-5267/external-snapshotter-leaderelection Nov 13 05:53:27.670: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-mock Nov 13 05:53:27.673: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7820 Nov 13 05:53:27.675: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7820 Nov 13 05:53:27.678: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7820 Nov 13 05:53:27.681: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7820 Nov 13 05:53:27.683: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7820 Nov 13 05:53:27.686: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7820 Nov 13 05:53:27.688: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7820 Nov 13 05:53:27.690: INFO: creating *v1.StatefulSet: csi-mock-volumes-7820-5267/csi-mockplugin Nov 13 05:53:27.694: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7820 Nov 13 05:53:27.697: INFO: creating *v1.StatefulSet: csi-mock-volumes-7820-5267/csi-mockplugin-attacher Nov 13 05:53:27.700: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7820" Nov 13 05:53:27.702: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7820 to register on node node2 STEP: Creating pod Nov 13 05:53:42.223: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 13 05:54:04.247: INFO: Deleting pod "pvc-volume-tester-5dlg9" in namespace "csi-mock-volumes-7820" Nov 13 05:54:04.252: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5dlg9" to be fully deleted STEP: Deleting pod pvc-volume-tester-5dlg9 Nov 13 05:54:12.261: INFO: Deleting pod 
"pvc-volume-tester-5dlg9" in namespace "csi-mock-volumes-7820" STEP: Deleting claim pvc-q8c5q Nov 13 05:54:12.272: INFO: Waiting up to 2m0s for PersistentVolume pvc-d7b124b3-011d-43db-bbc2-a89a639e3377 to get deleted Nov 13 05:54:12.274: INFO: PersistentVolume pvc-d7b124b3-011d-43db-bbc2-a89a639e3377 found and phase=Bound (2.107785ms) Nov 13 05:54:14.280: INFO: PersistentVolume pvc-d7b124b3-011d-43db-bbc2-a89a639e3377 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-7820 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7820 STEP: Waiting for namespaces [csi-mock-volumes-7820] to vanish STEP: uninstalling csi mock driver Nov 13 05:54:20.297: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-attacher Nov 13 05:54:20.303: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7820 Nov 13 05:54:20.307: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7820 Nov 13 05:54:20.310: INFO: deleting *v1.Role: csi-mock-volumes-7820-5267/external-attacher-cfg-csi-mock-volumes-7820 Nov 13 05:54:20.313: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7820-5267/csi-attacher-role-cfg Nov 13 05:54:20.317: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-provisioner Nov 13 05:54:20.321: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7820 Nov 13 05:54:20.324: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7820 Nov 13 05:54:20.328: INFO: deleting *v1.Role: csi-mock-volumes-7820-5267/external-provisioner-cfg-csi-mock-volumes-7820 Nov 13 05:54:20.334: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7820-5267/csi-provisioner-role-cfg Nov 13 05:54:20.341: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-resizer Nov 13 05:54:20.344: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7820 Nov 13 05:54:20.352: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7820 Nov 13 05:54:20.356: INFO: deleting *v1.Role: csi-mock-volumes-7820-5267/external-resizer-cfg-csi-mock-volumes-7820 Nov 13 05:54:20.360: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7820-5267/csi-resizer-role-cfg Nov 13 05:54:20.363: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-snapshotter Nov 13 05:54:20.367: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7820 Nov 13 05:54:20.372: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7820 Nov 13 05:54:20.376: INFO: deleting *v1.Role: csi-mock-volumes-7820-5267/external-snapshotter-leaderelection-csi-mock-volumes-7820 Nov 13 05:54:20.379: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7820-5267/external-snapshotter-leaderelection Nov 13 05:54:20.383: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7820-5267/csi-mock Nov 13 05:54:20.387: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7820 Nov 13 05:54:20.390: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7820 Nov 13 05:54:20.393: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7820 Nov 13 05:54:20.397: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7820 Nov 13 05:54:20.401: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7820 Nov 13 05:54:20.404: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-snapshotter-role-csi-mock-volumes-7820 Nov 13 05:54:20.407: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7820 Nov 13 05:54:20.410: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7820-5267/csi-mockplugin Nov 13 05:54:20.414: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7820 Nov 13 05:54:20.418: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7820-5267/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7820-5267 STEP: Waiting for namespaces [csi-mock-volumes-7820-5267] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:54:26.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:58.880 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":14,"skipped":422,"failed":0} Nov 13 05:54:26.439: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:51:48.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-5950 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:51:48.391: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5950-763/csi-attacher Nov 13 05:51:48.394: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5950 Nov 13 05:51:48.394: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5950 Nov 13 05:51:48.397: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5950 Nov 13 05:51:48.400: INFO: creating *v1.Role: csi-mock-volumes-5950-763/external-attacher-cfg-csi-mock-volumes-5950 Nov 13 05:51:48.403: INFO: creating *v1.RoleBinding: csi-mock-volumes-5950-763/csi-attacher-role-cfg Nov 13 05:51:48.406: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5950-763/csi-provisioner Nov 13 05:51:48.409: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5950 Nov 13 05:51:48.409: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5950 Nov 13 05:51:48.411: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5950 Nov 13 05:51:48.414: INFO: creating *v1.Role: csi-mock-volumes-5950-763/external-provisioner-cfg-csi-mock-volumes-5950 Nov 13 05:51:48.421: INFO: creating *v1.RoleBinding: csi-mock-volumes-5950-763/csi-provisioner-role-cfg Nov 13 05:51:48.426: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-5950-763/csi-resizer Nov 13 05:51:48.432: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5950 Nov 13 05:51:48.432: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5950 Nov 13 05:51:48.436: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5950 Nov 13 05:51:48.438: INFO: creating *v1.Role: csi-mock-volumes-5950-763/external-resizer-cfg-csi-mock-volumes-5950 Nov 13 05:51:48.441: INFO: creating *v1.RoleBinding: csi-mock-volumes-5950-763/csi-resizer-role-cfg Nov 13 05:51:48.444: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5950-763/csi-snapshotter Nov 13 05:51:48.447: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5950 Nov 13 05:51:48.447: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5950 Nov 13 05:51:48.450: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5950 Nov 13 05:51:48.452: INFO: creating *v1.Role: csi-mock-volumes-5950-763/external-snapshotter-leaderelection-csi-mock-volumes-5950 Nov 13 05:51:48.455: INFO: creating *v1.RoleBinding: csi-mock-volumes-5950-763/external-snapshotter-leaderelection Nov 13 05:51:48.458: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5950-763/csi-mock Nov 13 05:51:48.460: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5950 Nov 13 05:51:48.462: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5950 Nov 13 05:51:48.465: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5950 Nov 13 05:51:48.468: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5950 Nov 13 05:51:48.471: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5950 Nov 13 05:51:48.473: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5950 Nov 13 05:51:48.476: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5950 Nov 13 05:51:48.479: INFO: creating *v1.StatefulSet: csi-mock-volumes-5950-763/csi-mockplugin Nov 13 05:51:48.483: INFO: creating *v1.StatefulSet: csi-mock-volumes-5950-763/csi-mockplugin-attacher Nov 13 05:51:48.487: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5950 to register on node node2 STEP: Creating pod Nov 13 05:51:53.500: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:51:53.507: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cw75l] to have phase Bound Nov 13 05:51:53.509: INFO: PersistentVolumeClaim pvc-cw75l found but phase is Pending instead of Bound. 
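The expansion step that follows relies on two independent switches: allowVolumeExpansion on the StorageClass (on in this test) and the driver's EXPAND_VOLUME capability (withheld by the mock driver, resizingOnDriver=off), so bumping the claim's request is expected to have no effect on the volume. A minimal sketch of those two pieces, assuming `k8s.io/api` is available; the class and provisioner names are placeholders and `requestExpansion` is an illustrative helper, not the e2e test's own code:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// expandableClass marks a StorageClass as expandable; the actual resize still
// requires the CSI driver to advertise the expand capability, which the mock
// driver in the test above deliberately does not.
func expandableClass(name, provisioner string) *storagev1.StorageClass {
	allow := true
	return &storagev1.StorageClass{
		ObjectMeta:           metav1.ObjectMeta{Name: name},
		Provisioner:          provisioner,
		AllowVolumeExpansion: &allow,
	}
}

// requestExpansion bumps the claim's storage request in place; with an
// expandable class this normally triggers the resize flow, but without driver
// support the PVC's status capacity is expected to stay unchanged.
func requestExpansion(pvc *v1.PersistentVolumeClaim, newSize string) {
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse(newSize)
}

func main() {
	sc := expandableClass("csi-mock-sc-example", "csi-mock.example.com")
	pvc := &v1.PersistentVolumeClaim{Spec: v1.PersistentVolumeClaimSpec{
		Resources: v1.ResourceRequirements{Requests: v1.ResourceList{
			v1.ResourceStorage: resource.MustParse("1Gi"),
		}},
	}}
	requestExpansion(pvc, "2Gi")
	q := pvc.Spec.Resources.Requests[v1.ResourceStorage]
	fmt.Println(*sc.AllowVolumeExpansion, q.String())
}
```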
Nov 13 05:51:55.513: INFO: PersistentVolumeClaim pvc-cw75l found and phase=Bound (2.006713129s) STEP: Expanding current pvc STEP: Deleting pod pvc-volume-tester-vhrzh Nov 13 05:54:07.552: INFO: Deleting pod "pvc-volume-tester-vhrzh" in namespace "csi-mock-volumes-5950" Nov 13 05:54:07.557: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vhrzh" to be fully deleted STEP: Deleting claim pvc-cw75l Nov 13 05:54:11.573: INFO: Waiting up to 2m0s for PersistentVolume pvc-eb21f67c-814e-478f-ad4f-a3de0b9486bb to get deleted Nov 13 05:54:11.575: INFO: PersistentVolume pvc-eb21f67c-814e-478f-ad4f-a3de0b9486bb found and phase=Bound (2.139739ms) Nov 13 05:54:13.579: INFO: PersistentVolume pvc-eb21f67c-814e-478f-ad4f-a3de0b9486bb found and phase=Released (2.00585805s) Nov 13 05:54:15.584: INFO: PersistentVolume pvc-eb21f67c-814e-478f-ad4f-a3de0b9486bb was removed STEP: Deleting storageclass csi-mock-volumes-5950-scb5hvk STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5950 STEP: Waiting for namespaces [csi-mock-volumes-5950] to vanish STEP: uninstalling csi mock driver Nov 13 05:54:21.597: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5950-763/csi-attacher Nov 13 05:54:21.600: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5950 Nov 13 05:54:21.604: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5950 Nov 13 05:54:21.608: INFO: deleting *v1.Role: csi-mock-volumes-5950-763/external-attacher-cfg-csi-mock-volumes-5950 Nov 13 05:54:21.612: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5950-763/csi-attacher-role-cfg Nov 13 05:54:21.615: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5950-763/csi-provisioner Nov 13 05:54:21.620: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5950 Nov 13 05:54:21.626: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5950 Nov 13 05:54:21.633: INFO: deleting *v1.Role: csi-mock-volumes-5950-763/external-provisioner-cfg-csi-mock-volumes-5950 Nov 13 05:54:21.639: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5950-763/csi-provisioner-role-cfg Nov 13 05:54:21.645: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5950-763/csi-resizer Nov 13 05:54:21.649: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5950 Nov 13 05:54:21.653: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5950 Nov 13 05:54:21.656: INFO: deleting *v1.Role: csi-mock-volumes-5950-763/external-resizer-cfg-csi-mock-volumes-5950 Nov 13 05:54:21.660: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5950-763/csi-resizer-role-cfg Nov 13 05:54:21.663: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5950-763/csi-snapshotter Nov 13 05:54:21.667: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5950 Nov 13 05:54:21.671: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5950 Nov 13 05:54:21.674: INFO: deleting *v1.Role: csi-mock-volumes-5950-763/external-snapshotter-leaderelection-csi-mock-volumes-5950 Nov 13 05:54:21.678: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5950-763/external-snapshotter-leaderelection Nov 13 05:54:21.682: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5950-763/csi-mock Nov 13 05:54:21.685: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5950 Nov 13 05:54:21.689: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5950 Nov 13 05:54:21.692: INFO: deleting 
*v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5950 Nov 13 05:54:21.695: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5950 Nov 13 05:54:21.699: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5950 Nov 13 05:54:21.702: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5950 Nov 13 05:54:21.705: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5950 Nov 13 05:54:21.709: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5950-763/csi-mockplugin Nov 13 05:54:21.713: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5950-763/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5950-763 STEP: Waiting for namespaces [csi-mock-volumes-5950-763] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:54:33.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:165.413 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:53:42.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when csiServiceAccountTokenEnabled=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 STEP: Building a driver namespace object, basename csi-mock-volumes-4160 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 13 05:53:42.975: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-attacher Nov 13 05:53:42.978: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4160 Nov 13 05:53:42.978: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4160 Nov 13 05:53:42.981: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4160 Nov 13 05:53:42.984: INFO: creating *v1.Role: csi-mock-volumes-4160-99/external-attacher-cfg-csi-mock-volumes-4160 Nov 13 05:53:42.986: INFO: creating *v1.RoleBinding: csi-mock-volumes-4160-99/csi-attacher-role-cfg Nov 13 05:53:42.989: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-provisioner Nov 13 05:53:42.992: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4160 Nov 13 05:53:42.992: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4160 Nov 13 05:53:42.995: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4160 Nov 13 05:53:42.998: INFO: creating *v1.Role: csi-mock-volumes-4160-99/external-provisioner-cfg-csi-mock-volumes-4160 Nov 13 05:53:43.000: INFO: creating 
*v1.RoleBinding: csi-mock-volumes-4160-99/csi-provisioner-role-cfg Nov 13 05:53:43.005: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-resizer Nov 13 05:53:43.008: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4160 Nov 13 05:53:43.008: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4160 Nov 13 05:53:43.010: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4160 Nov 13 05:53:43.013: INFO: creating *v1.Role: csi-mock-volumes-4160-99/external-resizer-cfg-csi-mock-volumes-4160 Nov 13 05:53:43.016: INFO: creating *v1.RoleBinding: csi-mock-volumes-4160-99/csi-resizer-role-cfg Nov 13 05:53:43.019: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-snapshotter Nov 13 05:53:43.021: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4160 Nov 13 05:53:43.021: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4160 Nov 13 05:53:43.024: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4160 Nov 13 05:53:43.027: INFO: creating *v1.Role: csi-mock-volumes-4160-99/external-snapshotter-leaderelection-csi-mock-volumes-4160 Nov 13 05:53:43.029: INFO: creating *v1.RoleBinding: csi-mock-volumes-4160-99/external-snapshotter-leaderelection Nov 13 05:53:43.032: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-mock Nov 13 05:53:43.034: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4160 Nov 13 05:53:43.036: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4160 Nov 13 05:53:43.039: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4160 Nov 13 05:53:43.041: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4160 Nov 13 05:53:43.044: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4160 Nov 13 05:53:43.046: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4160 Nov 13 05:53:43.048: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4160 Nov 13 05:53:43.050: INFO: creating *v1.StatefulSet: csi-mock-volumes-4160-99/csi-mockplugin Nov 13 05:53:43.054: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4160 Nov 13 05:53:43.057: INFO: creating *v1.StatefulSet: csi-mock-volumes-4160-99/csi-mockplugin-attacher Nov 13 05:53:43.060: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4160" Nov 13 05:53:43.063: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4160 to register on node node2 STEP: Creating pod Nov 13 05:53:52.579: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 13 05:53:52.583: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6ctk7] to have phase Bound Nov 13 05:53:52.586: INFO: PersistentVolumeClaim pvc-6ctk7 found but phase is Pending instead of Bound. 
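The csiServiceAccountTokenEnabled=false case being set up here comes down to the CSIDriver object's tokenRequests field: when it is left empty, kubelet does not hand the pod's service account token to the driver via NodePublishVolume's volume_context, which is what the later driver-log check confirms. A sketch of that knob, under the assumption that the storage.k8s.io/v1 token fields (beta in the v1.21 cluster shown here) are present in the client library in use; the driver name and audience are placeholders:

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// csiDriverObject builds a CSIDriver with or without tokenRequests. With
// withToken=false the spec stays empty, so no service account token should
// appear in the volume_context passed to the driver.
func csiDriverObject(name string, withToken bool) *storagev1.CSIDriver {
	drv := &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec:       storagev1.CSIDriverSpec{},
	}
	if withToken {
		exp := int64(600)
		drv.Spec.TokenRequests = []storagev1.TokenRequest{{
			Audience:          "example-audience", // placeholder audience
			ExpirationSeconds: &exp,
		}}
	}
	return drv
}

func main() {
	drv := csiDriverObject("csi-mock-example", false)
	fmt.Println(len(drv.Spec.TokenRequests))
}
```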
Nov 13 05:53:54.592: INFO: PersistentVolumeClaim pvc-6ctk7 found and phase=Bound (2.008220099s)
STEP: Deleting the previously created pod
Nov 13 05:54:14.613: INFO: Deleting pod "pvc-volume-tester-8585m" in namespace "csi-mock-volumes-4160"
Nov 13 05:54:14.617: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8585m" to be fully deleted
STEP: Checking CSI driver logs
Nov 13 05:54:18.633: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2408229e-b10b-4f5f-a89e-aba8592707f1/volumes/kubernetes.io~csi/pvc-cd0d151c-6611-4b6e-91ef-dfee5c715ff0/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-8585m
Nov 13 05:54:18.633: INFO: Deleting pod "pvc-volume-tester-8585m" in namespace "csi-mock-volumes-4160"
STEP: Deleting claim pvc-6ctk7
Nov 13 05:54:18.643: INFO: Waiting up to 2m0s for PersistentVolume pvc-cd0d151c-6611-4b6e-91ef-dfee5c715ff0 to get deleted
Nov 13 05:54:18.646: INFO: PersistentVolume pvc-cd0d151c-6611-4b6e-91ef-dfee5c715ff0 found and phase=Bound (2.350636ms)
Nov 13 05:54:20.652: INFO: PersistentVolume pvc-cd0d151c-6611-4b6e-91ef-dfee5c715ff0 found and phase=Released (2.008140207s)
Nov 13 05:54:22.655: INFO: PersistentVolume pvc-cd0d151c-6611-4b6e-91ef-dfee5c715ff0 found and phase=Released (4.011872592s)
Nov 13 05:54:24.660: INFO: PersistentVolume pvc-cd0d151c-6611-4b6e-91ef-dfee5c715ff0 found and phase=Released (6.01607686s)
Nov 13 05:54:26.663: INFO: PersistentVolume pvc-cd0d151c-6611-4b6e-91ef-dfee5c715ff0 was removed
STEP: Deleting storageclass csi-mock-volumes-4160-sc7k77q
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-4160
STEP: Waiting for namespaces [csi-mock-volumes-4160] to vanish
STEP: uninstalling csi mock driver
Nov 13 05:54:32.676: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-attacher
Nov 13 05:54:32.680: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4160
Nov 13 05:54:32.683: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4160
Nov 13 05:54:32.687: INFO: deleting *v1.Role: csi-mock-volumes-4160-99/external-attacher-cfg-csi-mock-volumes-4160
Nov 13 05:54:32.690: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4160-99/csi-attacher-role-cfg
Nov 13 05:54:32.694: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-provisioner
Nov 13 05:54:32.698: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4160
Nov 13 05:54:32.701: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4160
Nov 13 05:54:32.708: INFO: deleting *v1.Role: csi-mock-volumes-4160-99/external-provisioner-cfg-csi-mock-volumes-4160
Nov 13 05:54:32.711: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4160-99/csi-provisioner-role-cfg
Nov 13 05:54:32.717: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-resizer
Nov 13 05:54:32.721: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4160
Nov 13 05:54:32.724: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4160
Nov 13 05:54:32.728: INFO: deleting *v1.Role: csi-mock-volumes-4160-99/external-resizer-cfg-csi-mock-volumes-4160
Nov 13 05:54:32.732: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4160-99/csi-resizer-role-cfg
Nov 13 05:54:32.735: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-snapshotter
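In the claim-deletion step logged above, the framework deletes the PVC and then polls the bound PersistentVolume (phase Bound, then Released) until it disappears. The sketch below shows one way to express such a wait with client-go; it is illustrative only, the PV name comes from the log, and the poll interval is an assumption. The driver-uninstall log continues after this sketch.

```go
// Minimal sketch (assumed wiring, not suite code): wait for a PersistentVolume
// to be removed after its claim is deleted, treating NotFound as success.
package example

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVDeleted polls the named PV; a NotFound error means the volume has
// been reclaimed and deleted, which ends the wait successfully.
func waitForPVDeleted(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Printf("PersistentVolume %s was removed\n", name)
			return true, nil
		}
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolume %s found and phase=%s\n", name, pv.Status.Phase)
		return false, nil
	})
}
```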
Nov 13 05:54:32.738: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4160
Nov 13 05:54:32.741: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4160
Nov 13 05:54:32.744: INFO: deleting *v1.Role: csi-mock-volumes-4160-99/external-snapshotter-leaderelection-csi-mock-volumes-4160
Nov 13 05:54:32.751: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4160-99/external-snapshotter-leaderelection
Nov 13 05:54:32.754: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4160-99/csi-mock
Nov 13 05:54:32.757: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4160
Nov 13 05:54:32.760: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4160
Nov 13 05:54:32.764: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4160
Nov 13 05:54:32.767: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4160
Nov 13 05:54:32.770: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4160
Nov 13 05:54:32.773: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4160
Nov 13 05:54:32.776: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4160
Nov 13 05:54:32.779: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4160-99/csi-mockplugin
Nov 13 05:54:32.782: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4160
Nov 13 05:54:32.785: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4160-99/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-4160-99
STEP: Waiting for namespaces [csi-mock-volumes-4160-99] to vanish
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:54:38.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:55.894 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should not be plumbed down when csiServiceAccountTokenEnabled=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":17,"skipped":611,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]}
Nov 13 05:54:38.807: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:49:32.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[It] should fail due to wrong node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324
STEP: Initializing test volumes
Nov 13 05:49:36.865: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a5034d4f-b683-4d84-8764-6f6224ea04f6] Namespace:persistent-local-volumes-test-549 PodName:hostexec-node2-vctp7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:49:36.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 13 05:49:37.106: INFO: Creating a PV followed by a PVC
Nov 13 05:49:37.113: INFO: Waiting for PV local-pvk48fv to bind to PVC pvc-dmbhs
Nov 13 05:49:37.113: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dmbhs] to have phase Bound
Nov 13 05:49:37.115: INFO: PersistentVolumeClaim pvc-dmbhs found but phase is Pending instead of Bound.
Nov 13 05:49:39.119: INFO: PersistentVolumeClaim pvc-dmbhs found but phase is Pending instead of Bound.
Nov 13 05:49:41.123: INFO: PersistentVolumeClaim pvc-dmbhs found but phase is Pending instead of Bound.
Nov 13 05:49:43.126: INFO: PersistentVolumeClaim pvc-dmbhs found and phase=Bound (6.013435773s)
Nov 13 05:49:43.126: INFO: Waiting up to 3m0s for PersistentVolume local-pvk48fv to have phase Bound
Nov 13 05:49:43.128: INFO: PersistentVolume local-pvk48fv found and phase=Bound (1.774552ms)
STEP: Cleaning up PVC and PV
Nov 13 05:54:43.152: INFO: Deleting PersistentVolumeClaim "pvc-dmbhs"
Nov 13 05:54:43.160: INFO: Deleting PersistentVolume "local-pvk48fv"
STEP: Removing the test directory
Nov 13 05:54:43.164: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a5034d4f-b683-4d84-8764-6f6224ea04f6] Namespace:persistent-local-volumes-test-549 PodName:hostexec-node2-vctp7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 13 05:54:43.164: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:54:43.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-549" for this suite.
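The local-volume spec above creates a directory on node2 and exposes it as a local PersistentVolume whose node affinity pins it to that node, which is why a pod placed on a different node cannot mount it. The sketch below shows roughly what such a PV object looks like using the Kubernetes API types; it is illustrative only, the path and node name mirror the log, and the storage class name and capacity are assumptions. The spec summary follows.

```go
// Minimal sketch (assumed values, not suite code): a local PersistentVolume
// whose required node affinity pins it to node2.
package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func localPV() *v1.PersistentVolume {
	storageClass := "local-storage" // assumed; the suite generates its own class name
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pvk48fv"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("2Gi"), // assumed size
			},
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: storageClass,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				// Backing directory created on the host via the hostexec pod above.
				Local: &v1.LocalVolumeSource{
					Path: "/tmp/local-volume-test-a5034d4f-b683-4d84-8764-6f6224ea04f6",
				},
			},
			// Required node affinity: only pods scheduled to node2 can use this PV.
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{"node2"},
						}},
					}},
				},
			},
		},
	}
}
```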
• [SLOW TEST:310.453 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Local volume that cannot be mounted [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304
    should fail due to wrong node
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to wrong node","total":-1,"completed":26,"skipped":1076,"failed":0}
Nov 13 05:54:43.272: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:50:36.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to secret object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411
STEP: Creating the pod
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:55:36.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7290" for this suite.
• [SLOW TEST:300.054 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  Should fail non-optional pod creation due to secret object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411
------------------------------
{"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":18,"skipped":757,"failed":0}
Nov 13 05:55:36.120: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:53:54.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557
STEP: Creating configMap with name cm-test-opt-create-656a7879-5d17-485e-a3f4-3dfa61ebc8ab
STEP: Creating the pod
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:58:54.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-293" for this suite.
• [SLOW TEST:300.061 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":16,"skipped":487,"failed":0}
Nov 13 05:58:54.782: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":16,"skipped":479,"failed":0}
Nov 13 05:54:33.732: INFO: Running AfterSuite actions on all nodes
Nov 13 05:58:54.827: INFO: Running AfterSuite actions on node 1
Nov 13 05:58:54.827: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742

Ran 163 of 5770 Specs in 1137.442 seconds
FAIL! -- 162 Passed | 1 Failed | 0 Pending | 5607 Skipped

Ginkgo ran 1 suite in 18m59.034955746s
Test Suite Failed
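The two 300-second "Should fail non-optional pod creation" specs summarized above rely on a volume source whose secret or configMap key is marked non-optional: when the referenced object or key is missing, kubelet never populates the volume and the pod stays unready, which is the failure the specs assert. The sketch below illustrates that pattern with the Kubernetes API types; it is not suite code, the configMap name mirrors the log, and the key name is a made-up example of a key that does not exist.

```go
// Minimal sketch (assumed values, not suite code): a pod volume projecting a
// specific configMap key with optional=false, so a missing key blocks the mount.
package example

import v1 "k8s.io/api/core/v1"

func nonOptionalConfigMapVolume() v1.Volume {
	optional := false // non-optional: kubelet will not mount until the key exists
	return v1.Volume{
		Name: "cm-volume",
		VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{
					Name: "cm-test-opt-create-656a7879-5d17-485e-a3f4-3dfa61ebc8ab",
				},
				Items: []v1.KeyToPath{{
					Key:  "missing-key", // hypothetical key absent from the configMap
					Path: "data",
				}},
				Optional: &optional,
			},
		},
	}
}
```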