I0524 20:31:26.470035 17 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0524 20:31:26.470177 17 e2e.go:129] Starting e2e run "a4798658-5b7e-4f68-89e0-c010e36391a7" on Ginkgo node 1
{"msg":"Test Suite starting","total":18,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621888284 - Will randomize all specs
Will run 18 of 5667 specs

May 24 20:31:26.593: INFO: >>> kubeConfig: /root/.kube/config
May 24 20:31:26.597: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 24 20:31:26.626: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 24 20:31:26.678: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 24 20:31:26.678: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 24 20:31:26.678: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 24 20:31:26.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 24 20:31:26.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 24 20:31:26.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 24 20:31:26.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 24 20:31:26.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 24 20:31:26.691: INFO: e2e test version: v1.20.6
May 24 20:31:26.692: INFO: kube-apiserver version: v1.20.7
May 24 20:31:26.692: INFO: >>> kubeConfig: /root/.kube/config
May 24 20:31:26.699: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:31:26.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
May 24 20:31:26.732: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 20:31:26.742: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:31:26.745: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:31:26.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-244" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.056 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics with the correct PVC ref [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:31:26.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441
STEP: Setting up 10 local volumes on node "leguer-worker"
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-487a8c05-082d-440b-8d96-dda4cd8e6ec1"
May 24 20:31:28.821: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-487a8c05-082d-440b-8d96-dda4cd8e6ec1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-487a8c05-082d-440b-8d96-dda4cd8e6ec1" "/tmp/local-volume-test-487a8c05-082d-440b-8d96-dda4cd8e6ec1"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:28.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-243d5d92-6a6c-44d6-98a1-cbfdd3486f95"
May 24 20:31:29.000: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-243d5d92-6a6c-44d6-98a1-cbfdd3486f95" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-243d5d92-6a6c-44d6-98a1-cbfdd3486f95" "/tmp/local-volume-test-243d5d92-6a6c-44d6-98a1-cbfdd3486f95"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:29.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-bda01b87-9bbc-4bc0-8c2c-1f5282b18811"
May 24 20:31:29.134: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-bda01b87-9bbc-4bc0-8c2c-1f5282b18811" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-bda01b87-9bbc-4bc0-8c2c-1f5282b18811" "/tmp/local-volume-test-bda01b87-9bbc-4bc0-8c2c-1f5282b18811"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:29.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-11d8fde2-d636-4bad-ac98-c226e7c80f07"
May 24 20:31:29.272: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-11d8fde2-d636-4bad-ac98-c226e7c80f07" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-11d8fde2-d636-4bad-ac98-c226e7c80f07" "/tmp/local-volume-test-11d8fde2-d636-4bad-ac98-c226e7c80f07"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:29.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-ff4c6aff-0b48-49cb-9197-6b695f97b662"
May 24 20:31:29.415: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ff4c6aff-0b48-49cb-9197-6b695f97b662" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ff4c6aff-0b48-49cb-9197-6b695f97b662" "/tmp/local-volume-test-ff4c6aff-0b48-49cb-9197-6b695f97b662"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:29.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-41037a10-0c1c-4efd-b8bf-ecff8c7f0bae"
May 24 20:31:29.555: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-41037a10-0c1c-4efd-b8bf-ecff8c7f0bae" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-41037a10-0c1c-4efd-b8bf-ecff8c7f0bae" "/tmp/local-volume-test-41037a10-0c1c-4efd-b8bf-ecff8c7f0bae"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:29.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-caee0fff-cf34-4b5b-a6ca-685e840b2b03"
May 24 20:31:29.699: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-caee0fff-cf34-4b5b-a6ca-685e840b2b03" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-caee0fff-cf34-4b5b-a6ca-685e840b2b03" "/tmp/local-volume-test-caee0fff-cf34-4b5b-a6ca-685e840b2b03"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:29.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-46a41293-59ad-42de-abee-3d44e92a04a3"
May 24 20:31:29.842: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-46a41293-59ad-42de-abee-3d44e92a04a3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-46a41293-59ad-42de-abee-3d44e92a04a3" "/tmp/local-volume-test-46a41293-59ad-42de-abee-3d44e92a04a3"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:29.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-e4e7aed0-1385-452c-b75a-56c240c19ea4"
May 24 20:31:29.976: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e4e7aed0-1385-452c-b75a-56c240c19ea4" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e4e7aed0-1385-452c-b75a-56c240c19ea4" "/tmp/local-volume-test-e4e7aed0-1385-452c-b75a-56c240c19ea4"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:29.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-1d5e3ae3-8828-40ea-acb0-38c387aa428d"
May 24 20:31:30.114: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1d5e3ae3-8828-40ea-acb0-38c387aa428d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1d5e3ae3-8828-40ea-acb0-38c387aa428d" "/tmp/local-volume-test-1d5e3ae3-8828-40ea-acb0-38c387aa428d"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:30.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Setting up 10 local volumes on node "leguer-worker2"
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-85ab4b07-032a-4cd5-a2b0-05e269ea2ead"
May 24 20:31:32.258: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-85ab4b07-032a-4cd5-a2b0-05e269ea2ead" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-85ab4b07-032a-4cd5-a2b0-05e269ea2ead" "/tmp/local-volume-test-85ab4b07-032a-4cd5-a2b0-05e269ea2ead"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:32.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-31d890ef-abc0-47af-b085-86dfaf87a74c"
May 24 20:31:32.446: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-31d890ef-abc0-47af-b085-86dfaf87a74c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-31d890ef-abc0-47af-b085-86dfaf87a74c" "/tmp/local-volume-test-31d890ef-abc0-47af-b085-86dfaf87a74c"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:32.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-9ce9dcd1-d0dc-4053-971f-065b9905e621"
May 24 20:31:32.586: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9ce9dcd1-d0dc-4053-971f-065b9905e621" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9ce9dcd1-d0dc-4053-971f-065b9905e621" "/tmp/local-volume-test-9ce9dcd1-d0dc-4053-971f-065b9905e621"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:32.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-806930ff-0531-464a-9365-8b9c5e3049df"
May 24 20:31:32.722: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-806930ff-0531-464a-9365-8b9c5e3049df" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-806930ff-0531-464a-9365-8b9c5e3049df" "/tmp/local-volume-test-806930ff-0531-464a-9365-8b9c5e3049df"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:32.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-dea60aa1-8265-4288-a7cf-192a0ba62905"
May 24 20:31:32.862: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-dea60aa1-8265-4288-a7cf-192a0ba62905" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-dea60aa1-8265-4288-a7cf-192a0ba62905" "/tmp/local-volume-test-dea60aa1-8265-4288-a7cf-192a0ba62905"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:32.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-80c19c5d-beed-44bd-af2c-0f30bbc6a6c5"
May 24 20:31:33.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-80c19c5d-beed-44bd-af2c-0f30bbc6a6c5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-80c19c5d-beed-44bd-af2c-0f30bbc6a6c5" "/tmp/local-volume-test-80c19c5d-beed-44bd-af2c-0f30bbc6a6c5"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:33.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-60af8d8c-2aef-4904-98e5-38144bae8d26"
May 24 20:31:33.137: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-60af8d8c-2aef-4904-98e5-38144bae8d26" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-60af8d8c-2aef-4904-98e5-38144bae8d26" "/tmp/local-volume-test-60af8d8c-2aef-4904-98e5-38144bae8d26"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:33.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-ee3a1740-b1d2-4878-a561-aec9b3f599ac"
May 24 20:31:33.278: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ee3a1740-b1d2-4878-a561-aec9b3f599ac" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ee3a1740-b1d2-4878-a561-aec9b3f599ac" "/tmp/local-volume-test-ee3a1740-b1d2-4878-a561-aec9b3f599ac"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:33.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-3e770f87-b52e-47c1-a7fd-b0f0292fcc6b"
May 24 20:31:33.432: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3e770f87-b52e-47c1-a7fd-b0f0292fcc6b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3e770f87-b52e-47c1-a7fd-b0f0292fcc6b" "/tmp/local-volume-test-3e770f87-b52e-47c1-a7fd-b0f0292fcc6b"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:33.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-ff201289-3b20-4599-8585-3dcfc689f4c8"
May 24 20:31:33.566: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ff201289-3b20-4599-8585-3dcfc689f4c8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ff201289-3b20-4599-8585-3dcfc689f4c8" "/tmp/local-volume-test-ff201289-3b20-4599-8585-3dcfc689f4c8"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:31:33.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Create 20 PVs
STEP: Start a goroutine to recycle unbound PVs
[It] should be able to process many pods and reuse local volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
STEP: Creating 7 pods periodically
STEP: Waiting for all pods to complete successfully
May 24 20:31:38.919: INFO: Deleting pod pod-5d4d196c-6df1-4449-81c1-d38f002cd71a
May 24 20:31:38.928: INFO: Deleting PersistentVolumeClaim "pvc-kd69d"
May 24 20:31:38.934: INFO: Deleting PersistentVolumeClaim "pvc-kjggf"
May 24 20:31:38.939: INFO: Deleting PersistentVolumeClaim "pvc-sjvgn"
May 24 20:31:38.944: INFO: 1/28 pods finished
May 24 20:31:38.944: INFO: Deleting pod pod-ccd31fd6-3a47-4ecb-a912-31a92bfba5e3
May 24 20:31:38.951: INFO: Deleting PersistentVolumeClaim "pvc-q9nc9"
May 24 20:31:38.955: INFO: Deleting PersistentVolumeClaim "pvc-dgfcl"
STEP: Delete "local-pv5lbvz" and create a new PV for same local volume storage
May 24 20:31:38.959: INFO: Deleting PersistentVolumeClaim "pvc-znjnd"
May 24 20:31:38.962: INFO: 2/28 pods finished
STEP: Delete "local-pv5lbvz" and create a new PV for same local volume storage
STEP: Delete "local-pv6qnzs" and create a new PV for same local volume storage
STEP: Delete "local-pv6qnzs" and create a new PV for same local volume storage
STEP: Delete "local-pvcx5ns" and create a new PV for same local volume storage
STEP: Delete "local-pvxmmmg" and create a new PV for same local volume storage
STEP: Delete "local-pvk2xrw" and create a new PV for same local volume storage
STEP: Delete "local-pvdv4zh" and create a new PV for same local volume storage
STEP: Delete "local-pvdv4zh" and create a new PV for same local volume storage
May 24 20:31:40.919: INFO: Deleting pod pod-34f855b2-6586-44a8-8618-69bf15ca545e
May 24 20:31:40.928: INFO: Deleting PersistentVolumeClaim "pvc-8f2bq"
May 24 20:31:40.934: INFO: Deleting PersistentVolumeClaim "pvc-bj9wx"
May 24 20:31:41.023: INFO: Deleting PersistentVolumeClaim "pvc-5pn6w"
May 24 20:31:41.029: INFO: 3/28 pods finished
STEP: Delete "local-pv8mght" and create a new PV for same local volume storage
STEP: Delete "local-pv8mght" and create a new PV for same local volume storage
STEP: Delete "local-pv8g6sp" and create a new PV for same local volume storage
STEP: Delete "local-pv8g6sp" and create a new PV for same local volume storage
STEP: Delete "local-pvkk75d" and create a new PV for same local volume storage
STEP: Delete "local-pvkk75d" and create a new PV for same local volume storage
May 24 20:31:41.918: INFO: Deleting pod pod-7505b3a4-0849-4013-899c-4e9cce1d9c5d
May 24 20:31:41.932: INFO: Deleting PersistentVolumeClaim "pvc-748dq"
May 24 20:31:41.938: INFO: Deleting PersistentVolumeClaim "pvc-vjkzx"
May 24 20:31:41.943: INFO: Deleting PersistentVolumeClaim "pvc-7wx75"
May 24 20:31:42.025: INFO: 4/28 pods finished
STEP: Delete "local-pvvqvst" and create a new PV for same local volume storage
STEP: Delete "local-pvvqvst" and create a new PV for same local volume storage
STEP: Delete "local-pv25bbq" and create a new PV for same local volume storage
STEP: Delete "local-pvcp9vz" and create a new PV for same local volume storage
May 24 20:31:44.110: INFO: Deleting pod pod-b9664013-3364-47b9-910e-e1a0d078170a
May 24 20:31:44.117: INFO: Deleting PersistentVolumeClaim "pvc-mhh2t"
May 24 20:31:44.127: INFO: Deleting PersistentVolumeClaim "pvc-8k5ht"
May 24 20:31:44.324: INFO: Deleting PersistentVolumeClaim "pvc-qdl2d"
May 24 20:31:44.339: INFO: 5/28 pods finished
STEP: Delete "local-pvcngnl" and create a new PV for same local volume storage
STEP: Delete "local-pvcngnl" and create a new PV for same local volume storage
STEP: Delete "local-pv4w97h" and create a new PV for same local volume storage
STEP: Delete "local-pv4w97h" and create a new PV for same local volume storage
STEP: Delete "local-pvmxh9q" and create a new PV for same local volume storage
STEP: Delete "local-pvmxh9q" and create a new PV for same local volume storage
May 24 20:31:44.918: INFO: Deleting pod pod-c2b3e347-23ad-478e-a4fb-ba4133f1816a
May 24 20:31:45.026: INFO: Deleting PersistentVolumeClaim "pvc-6p552"
May 24 20:31:45.035: INFO: Deleting PersistentVolumeClaim "pvc-8xcw9"
May 24 20:31:45.040: INFO: Deleting PersistentVolumeClaim "pvc-rzn5c"
May 24 20:31:45.046: INFO: 6/28 pods finished
STEP: Delete "local-pvz7mxg" and create a new PV for same local volume storage
STEP: Delete "local-pvz7mxg" and create a new PV for same local volume storage
STEP: Delete "local-pvbw68z" and create a new PV for same local volume storage
STEP: Delete "local-pvbw68z" and create a new PV for same local volume storage
STEP: Delete "local-pvpvhgj" and create a new PV for same local volume storage
STEP: Delete "local-pvpvhgj" and create a new PV for same local volume storage
May 24 20:31:51.124: INFO: Deleting pod pod-71708a30-e08a-46b4-ac08-68c8fde82884
May 24 20:31:51.624: INFO: Deleting PersistentVolumeClaim "pvc-f6vbz"
May 24 20:31:51.630: INFO: Deleting PersistentVolumeClaim "pvc-9bq2s"
May 24 20:31:51.825: INFO: Deleting PersistentVolumeClaim "pvc-gp89r"
May 24 20:31:51.921: INFO: 7/28 pods finished
May 24 20:31:51.922: INFO: Deleting pod pod-7f9ad0c9-1a06-4c52-a415-66d70c2742d5
STEP: Delete "local-pvqs2hm" and create a new PV for same local volume storage
May 24 20:31:51.940: INFO: Deleting PersistentVolumeClaim "pvc-x8rd8"
May 24 20:31:51.949: INFO: Deleting PersistentVolumeClaim "pvc-2kfrs"
STEP: Delete "local-pvqs2hm" and create a new PV for same local volume storage
May 24 20:31:51.953: INFO: Deleting PersistentVolumeClaim "pvc-6fvrs"
STEP: Delete "local-pvf7xps" and create a new PV for same local volume storage
May 24 20:31:51.957: INFO: 8/28 pods finished
May 24 20:31:51.957: INFO: Deleting pod pod-c58fdab9-c32e-4061-9c93-0e50131dbb4c
May 24 20:31:51.964: INFO: Deleting PersistentVolumeClaim "pvc-p96bg"
STEP: Delete "local-pvf7xps" and create a new PV for same local volume storage
May 24 20:31:51.968: INFO: Deleting PersistentVolumeClaim "pvc-dlrn2"
STEP: Delete "local-pv74mhw" and create a new PV for same local volume storage
May 24 20:31:51.971: INFO: Deleting PersistentVolumeClaim "pvc-k5pxl"
May 24 20:31:51.974: INFO: 9/28 pods finished
STEP: Delete "local-pv74mhw" and create a new PV for same local volume storage
STEP: Delete "local-pvzfrq7" and create a new PV for same local volume storage
STEP: Delete "local-pvqf7l2" and create a new PV for same local volume storage
STEP: Delete "local-pvqf7l2" and create a new PV for same local volume storage
STEP: Delete "local-pv4nx9x" and create a new PV for same local volume storage
STEP: Delete "local-pvmcmlx" and create a new PV for same local volume storage
STEP: Delete "local-pv2kdc5" and create a new PV for same local volume storage
STEP: Delete "local-pvzl79w" and create a new PV for same local volume storage
May 24 20:31:52.935: INFO: Deleting pod pod-0749002b-508c-47d9-83e1-e5f79ef10348
May 24 20:31:53.030: INFO: Deleting PersistentVolumeClaim "pvc-mpkgv"
May 24 20:31:53.035: INFO: Deleting PersistentVolumeClaim "pvc-xdgxx"
May 24 20:31:53.039: INFO: Deleting PersistentVolumeClaim "pvc-5r52t"
May 24 20:31:53.044: INFO: 10/28 pods finished
STEP: Delete "local-pv9htst" and create a new PV for same local volume storage
STEP: Delete "local-pv9htst" and create a new PV for same local volume storage
STEP: Delete "local-pv782p4" and create a new PV for same local volume storage
STEP: Delete "local-pv782p4" and create a new PV for same local volume storage
STEP: Delete "local-pvkwp5l" and create a new PV for same local volume storage
STEP: Delete "local-pvkwp5l" and create a new PV for same local volume storage
May 24 20:31:53.932: INFO: Deleting pod pod-34d89216-e6be-4107-a4cf-bfe27a1e0e2e
May 24 20:31:53.940: INFO: Deleting PersistentVolumeClaim "pvc-8z6g9"
May 24 20:31:53.945: INFO: Deleting PersistentVolumeClaim "pvc-tdqpp"
May 24 20:31:53.950: INFO: Deleting PersistentVolumeClaim "pvc-wjzlb"
May 24 20:31:53.955: INFO: 11/28 pods finished
STEP: Delete "local-pvbwvfd" and create a new PV for same local volume storage
STEP: Delete "local-pvbwvfd" and create a new PV for same local volume storage
STEP: Delete "local-pvrqtn2" and create a new PV for same local volume storage
STEP: Delete "local-pvrqtn2" and create a new PV for same local volume storage
STEP: Delete "local-pvnqkxs" and create a new PV for same local volume storage
STEP: Delete "local-pvnqkxs" and create a new PV for same local volume storage
May 24 20:31:54.921: INFO: Deleting pod pod-0ad8217f-d137-4aaa-a172-6a57a287c2ff
May 24 20:31:54.932: INFO: Deleting PersistentVolumeClaim "pvc-sp8tq"
May 24 20:31:54.936: INFO: Deleting PersistentVolumeClaim "pvc-vxtrq"
May 24 20:31:54.941: INFO: Deleting PersistentVolumeClaim "pvc-lh28m"
May 24 20:31:54.977: INFO: 12/28 pods finished
STEP: Delete "local-pvtlpqh" and create a new PV for same local volume storage
STEP: Delete "local-pvtlpqh" and create a new PV for same local volume storage
STEP: Delete "local-pvwwtrw" and create a new PV for same local volume storage
STEP: Delete "local-pvwwtrw" and create a new PV for same local volume storage
STEP: Delete "local-pvd665d" and create a new PV for same local volume storage
STEP: Delete "local-pvd665d" and create a new PV for same local volume storage
May 24 20:31:58.919: INFO: Deleting pod pod-b9f9cd91-75d1-4090-89ce-157dddfcb21e
May 24 20:31:58.929: INFO: Deleting PersistentVolumeClaim "pvc-6vfdd"
May 24 20:31:58.934: INFO: Deleting PersistentVolumeClaim "pvc-v6k6m"
May 24 20:31:58.941: INFO: Deleting PersistentVolumeClaim "pvc-jv7qq"
May 24 20:31:58.945: INFO: 13/28 pods finished
May 24 20:31:58.945: INFO: Deleting pod pod-fb9a1e7d-3607-4810-84e7-e75974a3ac84
May 24 20:31:58.952: INFO: Deleting PersistentVolumeClaim "pvc-7xrf7"
May 24 20:31:58.957: INFO: Deleting PersistentVolumeClaim "pvc-69sw7"
STEP: Delete "local-pv85tgj" and create a new PV for same local volume storage
May 24 20:31:58.961: INFO: Deleting PersistentVolumeClaim "pvc-fpk2j"
May 24 20:31:58.964: INFO: 14/28 pods finished
STEP: Delete "local-pv85tgj" and create a new PV for same local volume storage
STEP: Delete "local-pvpjwh8" and create a new PV for same local volume storage
STEP: Delete "local-pvpjwh8" and create a new PV for same local volume storage
STEP: Delete "local-pvjbfw9" and create a new PV for same local volume storage
STEP: Delete "local-pvstljz" and create a new PV for same local volume storage
STEP: Delete "local-pvbz78p" and create a new PV for same local volume storage
STEP: Delete "local-pvr6787" and create a new PV for same local volume storage
May 24 20:32:00.919: INFO: Deleting pod pod-0444173f-679f-4034-9afc-cfd4dc2e35d3
May 24 20:32:01.032: INFO: Deleting PersistentVolumeClaim "pvc-89sfw"
May 24 20:32:01.037: INFO: Deleting PersistentVolumeClaim "pvc-dfcgw"
May 24 20:32:01.041: INFO: Deleting PersistentVolumeClaim "pvc-nz8cz"
May 24 20:32:01.046: INFO: 15/28 pods finished
STEP: Delete "local-pvt5zfh" and create a new PV for same local volume storage
STEP: Delete "local-pvt5zfh" and create a new PV for same local volume storage
STEP: Delete "local-pv8qv7m" and create a new PV for same local volume storage
STEP: Delete "local-pv8qv7m" and create a new PV for same local volume storage
STEP: Delete "local-pv4h2h7" and create a new PV for same local volume storage
STEP: Delete "local-pv4h2h7" and create a new PV for same local volume storage
May 24 20:32:01.932: INFO: Deleting pod pod-0127f822-f7b0-49fb-b355-dfa250cca9d7
May 24 20:32:01.941: INFO: Deleting PersistentVolumeClaim "pvc-h48w5"
May 24 20:32:01.946: INFO: Deleting PersistentVolumeClaim "pvc-bkm56"
May 24 20:32:01.950: INFO: Deleting PersistentVolumeClaim "pvc-lvr2r"
May 24 20:32:01.955: INFO: 16/28 pods finished
STEP: Delete "local-pvsdzvz" and create a new PV for same local volume storage
STEP: Delete "local-pvsdzvz" and create a new PV for same local volume storage
STEP: Delete "local-pvwm9vk" and create a new PV for same local volume storage
STEP: Delete "local-pvwm9vk" and create a new PV for same local volume storage
STEP: Delete "local-pvrtdwf" and create a new PV for same local volume storage
STEP: Delete "local-pvrtdwf" and create a new PV for same local volume storage
May 24 20:32:03.918: INFO: Deleting pod pod-bdca7aa9-10fe-455d-9fba-32af4418d9c4
May 24 20:32:03.928: INFO: Deleting PersistentVolumeClaim "pvc-c8xsv"
May 24 20:32:03.932: INFO: Deleting PersistentVolumeClaim "pvc-6dz4f"
May 24 20:32:03.937: INFO: Deleting PersistentVolumeClaim "pvc-wx8pn"
May 24 20:32:03.942: INFO: 17/28 pods finished
STEP: Delete "local-pvgnjv9" and create a new PV for same local volume storage
STEP: Delete "local-pvgnjv9" and create a new PV for same local volume storage
STEP: Delete "local-pv8hjq6" and create a new PV for same local volume storage
STEP: Delete "local-pv8hjq6" and create a new PV for same local volume storage
STEP: Delete "local-pvqgrc9" and create a new PV for same local volume storage
STEP: Delete "local-pvqgrc9" and create a new PV for same local volume storage
May 24 20:32:08.918: INFO: Deleting pod pod-7ebfb321-1720-453c-be15-cbb821a7d6d9
May 24 20:32:08.927: INFO: Deleting PersistentVolumeClaim "pvc-tqm4n"
May 24 20:32:08.933: INFO: Deleting PersistentVolumeClaim "pvc-lfn6p"
May 24 20:32:08.938: INFO: Deleting PersistentVolumeClaim "pvc-mdzjp"
May 24 20:32:08.943: INFO: 18/28 pods finished
May 24 20:32:08.943: INFO: Deleting pod pod-bbfd10e6-8ed2-42c5-a639-7a3047b02ce7
May 24 20:32:08.951: INFO: Deleting PersistentVolumeClaim "pvc-djdw9"
May 24 20:32:08.956: INFO: Deleting PersistentVolumeClaim "pvc-6rcz6"
STEP: Delete "local-pv2l5zg" and create a new PV for same local volume storage
May 24 20:32:08.960: INFO: Deleting PersistentVolumeClaim "pvc-cdvjd"
May 24 20:32:08.963: INFO: 19/28 pods finished
STEP: Delete "local-pv2l5zg" and create a new PV for same local volume storage
STEP: Delete "local-pvhkfcb" and create a new PV for same local volume storage
STEP: Delete "local-pvhkfcb" and create a new PV for same local volume storage
STEP: Delete "local-pvv6rb2" and create a new PV for same local volume storage
STEP: Delete "local-pvv6rb2" and create a new PV for same local volume storage
STEP: Delete "local-pvwrtz7" and create a new PV for same local volume storage
STEP: Delete "local-pv75j2s" and create a new PV for same local volume storage
STEP: Delete "local-pvpq7b4" and create a new PV for same local volume storage
STEP: Delete "local-pvpq7b4" and create a new PV for same local volume storage
May 24 20:32:09.918: INFO: Deleting pod pod-a0dd19f7-db5c-4a2b-8837-0f7069f49391
May 24 20:32:09.928: INFO: Deleting PersistentVolumeClaim "pvc-79cwj"
May 24 20:32:09.933: INFO: Deleting PersistentVolumeClaim "pvc-8bt6r"
May 24 20:32:09.938: INFO: Deleting PersistentVolumeClaim "pvc-ltd8d"
May 24 20:32:09.943: INFO: 20/28 pods finished
STEP: Delete "local-pv55bgq" and create a new PV for same local volume storage
STEP: Delete "local-pv55bgq" and create a new PV for same local volume storage
STEP: Delete "local-pv28jdr" and create a new PV for same local volume storage
STEP: Delete "local-pvckfhv" and create a new PV for same local volume storage
May 24 20:32:10.918: INFO: Deleting pod pod-7ac85036-6bea-422d-8d8c-4ac1658f0096
May 24 20:32:10.929: INFO: Deleting PersistentVolumeClaim "pvc-l2rhd"
May 24 20:32:10.933: INFO: Deleting PersistentVolumeClaim "pvc-t6jrj"
May 24 20:32:10.938: INFO: Deleting PersistentVolumeClaim "pvc-prtkb"
May 24 20:32:10.943: INFO: 21/28 pods finished
STEP: Delete "local-pvvnd7d" and create a new PV for same local volume storage
STEP: Delete "local-pvvnd7d" and create a new PV for same local volume storage
STEP: Delete "local-pv6vpq6" and create a new PV for same local volume storage
STEP: Delete "local-pv6vpq6" and create a new PV for same local volume storage
STEP: Delete "local-pvvwljt" and create a new PV for same local volume storage
STEP: Delete "local-pvvwljt" and create a new PV for same local volume storage
May 24 20:32:15.918: INFO: Deleting pod pod-f384a26a-7089-4e4e-b29a-65ca11e44300
May 24 20:32:15.931: INFO: Deleting PersistentVolumeClaim "pvc-n8662"
May 24 20:32:15.936: INFO: Deleting PersistentVolumeClaim "pvc-pqj5c"
May 24 20:32:15.940: INFO: Deleting PersistentVolumeClaim "pvc-f9c52"
May 24 20:32:15.946: INFO: 22/28 pods finished
STEP: Delete "local-pvnjsk6" and create a new PV for same local volume storage
STEP: Delete "local-pvnjsk6" and create a new PV for same local volume storage
STEP: Delete "local-pvzlxsq" and create a new PV for same local volume storage
STEP: Delete "local-pvzlxsq" and create a new PV for same local volume storage
STEP: Delete "local-pv8zxsg" and create a new PV for same local volume storage
STEP: Delete "local-pv8zxsg" and create a new PV for same local volume storage
May 24 20:32:16.919: INFO: Deleting pod pod-09d2a4d8-c0e1-4fac-af65-a8f7475aa2cd
May 24 20:32:17.032: INFO: Deleting PersistentVolumeClaim "pvc-m2hh7"
May 24 20:32:17.124: INFO: Deleting PersistentVolumeClaim "pvc-nsm74"
May 24 20:32:17.139: INFO: Deleting PersistentVolumeClaim
"pvc-z2h8b" May 24 20:32:17.151: INFO: 23/28 pods finished May 24 20:32:17.151: INFO: Deleting pod pod-9f428810-5e9a-4f8d-88e2-5d9b1dc995e8 May 24 20:32:17.163: INFO: Deleting PersistentVolumeClaim "pvc-wrm4x" May 24 20:32:17.166: INFO: Deleting PersistentVolumeClaim "pvc-nhk76" May 24 20:32:17.169: INFO: Deleting PersistentVolumeClaim "pvc-p9m6f" STEP: Delete "local-pv8wj5r" and create a new PV for same local volume storage May 24 20:32:17.172: INFO: 24/28 pods finished STEP: Delete "local-pv8wj5r" and create a new PV for same local volume storage STEP: Delete "local-pvmscvw" and create a new PV for same local volume storage STEP: Delete "local-pvmscvw" and create a new PV for same local volume storage STEP: Delete "local-pvh7lzw" and create a new PV for same local volume storage STEP: Delete "local-pvh7lzw" and create a new PV for same local volume storage STEP: Delete "local-pvcfgd4" and create a new PV for same local volume storage STEP: Delete "local-pvcfgd4" and create a new PV for same local volume storage STEP: Delete "local-pv4726x" and create a new PV for same local volume storage STEP: Delete "local-pv7nffh" and create a new PV for same local volume storage May 24 20:32:18.918: INFO: Deleting pod pod-77443aaf-1cde-45c7-b0ff-97bbf576611c May 24 20:32:18.929: INFO: Deleting PersistentVolumeClaim "pvc-jzlnc" May 24 20:32:18.934: INFO: Deleting PersistentVolumeClaim "pvc-rb2n5" May 24 20:32:18.939: INFO: Deleting PersistentVolumeClaim "pvc-nk4f4" May 24 20:32:18.944: INFO: 25/28 pods finished STEP: Delete "local-pvsd72k" and create a new PV for same local volume storage STEP: Delete "local-pvsd72k" and create a new PV for same local volume storage STEP: Delete "local-pvsxtjp" and create a new PV for same local volume storage STEP: Delete "local-pvsxtjp" and create a new PV for same local volume storage STEP: Delete "local-pvfkjlw" and create a new PV for same local volume storage STEP: Delete "local-pvfkjlw" and create a new PV for same local volume storage 
May 24 20:32:19.927: INFO: Deleting pod pod-5a68556e-c8bc-414c-a811-c127363daf37 May 24 20:32:20.031: INFO: Deleting PersistentVolumeClaim "pvc-bltp5" May 24 20:32:20.036: INFO: Deleting PersistentVolumeClaim "pvc-9stgf" May 24 20:32:20.041: INFO: Deleting PersistentVolumeClaim "pvc-5bk2c" May 24 20:32:20.045: INFO: 26/28 pods finished STEP: Delete "local-pvvjfpx" and create a new PV for same local volume storage STEP: Delete "local-pvvjfpx" and create a new PV for same local volume storage STEP: Delete "local-pvldr5l" and create a new PV for same local volume storage STEP: Delete "local-pvft2mt" and create a new PV for same local volume storage STEP: Delete "local-pvft2mt" and create a new PV for same local volume storage May 24 20:32:20.918: INFO: Deleting pod pod-8ef1fb53-afb7-4a0b-94ac-57adc6f85113 May 24 20:32:20.928: INFO: Deleting PersistentVolumeClaim "pvc-v2mdl" May 24 20:32:20.932: INFO: Deleting PersistentVolumeClaim "pvc-xftc5" May 24 20:32:20.937: INFO: Deleting PersistentVolumeClaim "pvc-g4f42" May 24 20:32:20.942: INFO: 27/28 pods finished STEP: Delete "local-pvcgb68" and create a new PV for same local volume storage STEP: Delete "local-pvcgb68" and create a new PV for same local volume storage STEP: Delete "local-pv7xvwt" and create a new PV for same local volume storage STEP: Delete "local-pvxpbzz" and create a new PV for same local volume storage May 24 20:32:28.027: INFO: Deleting pod pod-1da49c9d-c337-45fb-9458-c8aa05813dba May 24 20:32:28.331: INFO: Deleting PersistentVolumeClaim "pvc-nmpgd" May 24 20:32:28.723: INFO: Deleting PersistentVolumeClaim "pvc-ftmwz" May 24 20:32:28.922: INFO: Deleting PersistentVolumeClaim "pvc-gm2dp" May 24 20:32:28.927: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:505 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local 
volumes on node "leguer-worker2" STEP: Cleaning up PVC and PV May 24 20:32:28.927: INFO: pvc is nil May 24 20:32:28.927: INFO: Deleting PersistentVolume "local-pvprlmk" STEP: Cleaning up PVC and PV May 24 20:32:29.128: INFO: pvc is nil May 24 20:32:29.128: INFO: Deleting PersistentVolume "local-pvjpkzq" STEP: Cleaning up PVC and PV May 24 20:32:29.133: INFO: pvc is nil May 24 20:32:29.133: INFO: Deleting PersistentVolume "local-pvmhs7n" STEP: Cleaning up PVC and PV May 24 20:32:29.429: INFO: pvc is nil May 24 20:32:29.429: INFO: Deleting PersistentVolume "local-pv6ptz9" STEP: Cleaning up PVC and PV May 24 20:32:29.437: INFO: pvc is nil May 24 20:32:29.437: INFO: Deleting PersistentVolume "local-pvkjj2d" STEP: Cleaning up PVC and PV May 24 20:32:29.632: INFO: pvc is nil May 24 20:32:29.632: INFO: Deleting PersistentVolume "local-pvqkmbs" STEP: Cleaning up PVC and PV May 24 20:32:29.638: INFO: pvc is nil May 24 20:32:29.638: INFO: Deleting PersistentVolume "local-pv47bfk" STEP: Cleaning up PVC and PV May 24 20:32:29.643: INFO: pvc is nil May 24 20:32:29.643: INFO: Deleting PersistentVolume "local-pv4gdc6" STEP: Cleaning up PVC and PV May 24 20:32:29.647: INFO: pvc is nil May 24 20:32:29.647: INFO: Deleting PersistentVolume "local-pv8q729" STEP: Cleaning up PVC and PV May 24 20:32:29.651: INFO: pvc is nil May 24 20:32:29.651: INFO: Deleting PersistentVolume "local-pvgzbfh" STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-85ab4b07-032a-4cd5-a2b0-05e269ea2ead" May 24 20:32:29.655: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-85ab4b07-032a-4cd5-a2b0-05e269ea2ead"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:29.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 
20:32:29.834: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-85ab4b07-032a-4cd5-a2b0-05e269ea2ead] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:29.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-31d890ef-abc0-47af-b085-86dfaf87a74c" May 24 20:32:30.252: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-31d890ef-abc0-47af-b085-86dfaf87a74c"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:30.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:30.555: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-31d890ef-abc0-47af-b085-86dfaf87a74c] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:30.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-9ce9dcd1-d0dc-4053-971f-065b9905e621" May 24 20:32:30.939: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9ce9dcd1-d0dc-4053-971f-065b9905e621"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:30.939: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Removing the test directory May 24 20:32:31.441: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9ce9dcd1-d0dc-4053-971f-065b9905e621] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:31.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-806930ff-0531-464a-9365-8b9c5e3049df" May 24 20:32:31.754: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-806930ff-0531-464a-9365-8b9c5e3049df"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:31.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:32.051: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-806930ff-0531-464a-9365-8b9c5e3049df] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:32.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-dea60aa1-8265-4288-a7cf-192a0ba62905" May 24 20:32:32.243: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-dea60aa1-8265-4288-a7cf-192a0ba62905"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:32.243: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:32.457: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dea60aa1-8265-4288-a7cf-192a0ba62905] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:32.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-80c19c5d-beed-44bd-af2c-0f30bbc6a6c5" May 24 20:32:32.743: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-80c19c5d-beed-44bd-af2c-0f30bbc6a6c5"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:32.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:32.893: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-80c19c5d-beed-44bd-af2c-0f30bbc6a6c5] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:32.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-60af8d8c-2aef-4904-98e5-38144bae8d26" May 24 20:32:33.055: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-60af8d8c-2aef-4904-98e5-38144bae8d26"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} May 24 20:32:33.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:33.193: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-60af8d8c-2aef-4904-98e5-38144bae8d26] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:33.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-ee3a1740-b1d2-4878-a561-aec9b3f599ac" May 24 20:32:33.327: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ee3a1740-b1d2-4878-a561-aec9b3f599ac"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:33.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:33.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ee3a1740-b1d2-4878-a561-aec9b3f599ac] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:33.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-3e770f87-b52e-47c1-a7fd-b0f0292fcc6b" May 24 20:32:33.644: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3e770f87-b52e-47c1-a7fd-b0f0292fcc6b"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:33.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:33.829: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3e770f87-b52e-47c1-a7fd-b0f0292fcc6b] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:33.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-ff201289-3b20-4599-8585-3dcfc689f4c8" May 24 20:32:33.966: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ff201289-3b20-4599-8585-3dcfc689f4c8"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:33.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:34.103: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ff201289-3b20-4599-8585-3dcfc689f4c8] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker2-sr62l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:34.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "leguer-worker" STEP: Cleaning up PVC and PV May 24 20:32:34.226: INFO: pvc is nil May 24 20:32:34.226: INFO: Deleting PersistentVolume "local-pvt9h2h" STEP: Cleaning up PVC and PV May 24 20:32:34.336: INFO: pvc is nil May 24 20:32:34.336: INFO: Deleting PersistentVolume "local-pv6qd4f" STEP: Cleaning up PVC and PV May 24 20:32:34.341: INFO: pvc is nil May 24 
20:32:34.341: INFO: Deleting PersistentVolume "local-pvsr8lz" STEP: Cleaning up PVC and PV May 24 20:32:34.346: INFO: pvc is nil May 24 20:32:34.346: INFO: Deleting PersistentVolume "local-pvk76md" STEP: Cleaning up PVC and PV May 24 20:32:34.351: INFO: pvc is nil May 24 20:32:34.351: INFO: Deleting PersistentVolume "local-pvf289k" STEP: Cleaning up PVC and PV May 24 20:32:34.424: INFO: pvc is nil May 24 20:32:34.424: INFO: Deleting PersistentVolume "local-pvf9wp9" STEP: Cleaning up PVC and PV May 24 20:32:34.527: INFO: pvc is nil May 24 20:32:34.527: INFO: Deleting PersistentVolume "local-pvfjgnq" STEP: Cleaning up PVC and PV May 24 20:32:34.535: INFO: pvc is nil May 24 20:32:34.535: INFO: Deleting PersistentVolume "local-pvkm8nt" STEP: Cleaning up PVC and PV May 24 20:32:34.540: INFO: pvc is nil May 24 20:32:34.540: INFO: Deleting PersistentVolume "local-pvrfq5p" STEP: Cleaning up PVC and PV May 24 20:32:34.626: INFO: pvc is nil May 24 20:32:34.626: INFO: Deleting PersistentVolume "local-pvjlk8k" STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-487a8c05-082d-440b-8d96-dda4cd8e6ec1" May 24 20:32:34.633: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-487a8c05-082d-440b-8d96-dda4cd8e6ec1"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:34.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:34.852: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-487a8c05-082d-440b-8d96-dda4cd8e6ec1] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:34.852: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-243d5d92-6a6c-44d6-98a1-cbfdd3486f95" May 24 20:32:35.037: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-243d5d92-6a6c-44d6-98a1-cbfdd3486f95"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:35.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:35.186: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-243d5d92-6a6c-44d6-98a1-cbfdd3486f95] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:35.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-bda01b87-9bbc-4bc0-8c2c-1f5282b18811" May 24 20:32:35.334: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-bda01b87-9bbc-4bc0-8c2c-1f5282b18811"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:35.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:35.461: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bda01b87-9bbc-4bc0-8c2c-1f5282b18811] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 
24 20:32:35.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-11d8fde2-d636-4bad-ac98-c226e7c80f07" May 24 20:32:35.607: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-11d8fde2-d636-4bad-ac98-c226e7c80f07"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:35.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:35.730: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-11d8fde2-d636-4bad-ac98-c226e7c80f07] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:35.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-ff4c6aff-0b48-49cb-9197-6b695f97b662" May 24 20:32:35.947: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ff4c6aff-0b48-49cb-9197-6b695f97b662"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:35.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:36.081: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ff4c6aff-0b48-49cb-9197-6b695f97b662] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} May 24 20:32:36.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-41037a10-0c1c-4efd-b8bf-ecff8c7f0bae" May 24 20:32:36.215: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-41037a10-0c1c-4efd-b8bf-ecff8c7f0bae"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:36.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:36.340: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-41037a10-0c1c-4efd-b8bf-ecff8c7f0bae] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:36.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-caee0fff-cf34-4b5b-a6ca-685e840b2b03" May 24 20:32:36.472: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-caee0fff-cf34-4b5b-a6ca-685e840b2b03"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:36.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:36.601: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-caee0fff-cf34-4b5b-a6ca-685e840b2b03] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:36.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-46a41293-59ad-42de-abee-3d44e92a04a3" May 24 20:32:36.743: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-46a41293-59ad-42de-abee-3d44e92a04a3"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:36.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:36.926: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-46a41293-59ad-42de-abee-3d44e92a04a3] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:36.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-e4e7aed0-1385-452c-b75a-56c240c19ea4" May 24 20:32:37.057: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e4e7aed0-1385-452c-b75a-56c240c19ea4"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:37.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:37.212: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e4e7aed0-1385-452c-b75a-56c240c19ea4] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:37.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-1d5e3ae3-8828-40ea-acb0-38c387aa428d" May 24 20:32:37.351: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1d5e3ae3-8828-40ea-acb0-38c387aa428d"] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:37.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 24 20:32:37.490: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1d5e3ae3-8828-40ea-acb0-38c387aa428d] Namespace:persistent-local-volumes-test-2620 PodName:hostexec-leguer-worker-f5zc4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 24 20:32:37.490: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:32:37.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2620" for this suite. 
• [SLOW TEST:70.879 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427
    should be able to process many pods and reuse local volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":18,"completed":1,"skipped":524,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics
  should create metrics for total number of volumes in A/D Controller
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:32:37.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:32:37.680: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:32:37.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8890" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.044 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create metrics for total number of volumes in A/D Controller [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
[sig-storage] Pod Disks [Serial] attach on previously attached volumes should work
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:32:37.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
[It] [Serial] attach on previously attached volumes should work
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457
May 24 20:32:37.735: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:32:37.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-5894" for this suite.

S [SKIPPING] [0.053 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Serial] attach on previously attached volumes should work [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:493
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:32:37.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:32:37.780: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:32:37.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8386" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create unbound pvc count metrics for pvc controller after creating pvc only
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:493

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
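Each `{"msg":"PASSED …","total":…,"completed":…,"skipped":…,"failed":…}` line interleaved between specs in this run is a machine-readable Ginkgo progress record. As a hedged illustration (this script is not part of the e2e framework; the `summarize` name and the parsing approach are assumptions for this sketch), the counters can be tallied from a saved log like so:

```python
import json

def summarize(log_lines):
    """Tally the Ginkgo progress records ({"msg": ..., "total": ...} lines)
    that an e2e run emits between specs. Returns None if the log has none."""
    records = [json.loads(line) for line in log_lines
               if line.startswith('{"msg"')]
    if not records:
        return None
    last = records[-1]  # counters are cumulative, so the last record wins
    return {
        "passed": sum(1 for r in records if r["msg"].startswith("PASSED")),
        "completed": last["completed"],
        "skipped": last["skipped"],
        "failed": last["failed"],
        "total": last["total"],
    }

# Two real progress records from this run:
sample = [
    '{"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":18,"completed":1,"skipped":524,"failed":0}',
    '{"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":18,"completed":2,"skipped":2350,"failed":0}',
]
print(summarize(sample))
```

Note that `skipped` counts specs filtered out of the 5667 total, which is why it jumps by thousands between consecutive completed specs.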
------------------------------
[sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:32:37.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:32:37.940: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:32:37.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-2096" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.152 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create prometheus metrics for volume provisioning and attach/detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:32:37.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:619
[It] all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
STEP: Create a PVC
STEP: Create 50 pods to use this PVC
STEP: Wait for all pods are running
[AfterEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:633
STEP: Clean PV local-pvnk4nq
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:33:54.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6748" for this suite.
• [SLOW TEST:76.593 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:614
    all pods should be running
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":18,"completed":2,"skipped":2350,"failed":0}
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:33:54.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 24 20:34:02.612: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4978 PodName:hostexec-leguer-worker-mw8gr ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:34:02.612: INFO: >>> kubeConfig: /root/.kube/config
May 24 20:34:02.790: INFO: exec leguer-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 24 20:34:02.790: INFO: exec leguer-worker: stdout: "0\n"
May 24 20:34:02.790: INFO: exec leguer-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 24 20:34:02.790: INFO: exec leguer-worker: exit code: 0
May 24 20:34:02.790: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:02.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-4978" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [8.374 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:02.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 24 20:34:37.125: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6430 PodName:hostexec-leguer-worker-bdq6n ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:34:37.125: INFO: >>> kubeConfig: /root/.kube/config
May 24 20:34:37.272: INFO: exec leguer-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 24 20:34:37.272: INFO: exec leguer-worker: stdout: "0\n"
May 24 20:34:37.272: INFO: exec leguer-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 24 20:34:37.272: INFO: exec leguer-worker: exit code: 0
May 24 20:34:37.272: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:37.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6430" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [34.347 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:502
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:37.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:34:37.318: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:37.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-128" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.144 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:502

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
[sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:37.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:34:37.463: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:37.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-2473" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics in Volume Manager [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:480
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:37.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:34:37.505: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:37.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-2484" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create none metrics for pvc controller before creating any PV or PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:480

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:512
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:37.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:34:37.647: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:37.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8969" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.141 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create total pv count metrics for with plugin and volume mode labels after creating pv
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:512

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
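The `[Volume type: gce-localssd-scsi-fs]` specs above skip because the detection probe, `ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l`, returns a count of 0 on these kind nodes: `ls` fails on the missing directory, but `wc -l` still runs and prints `0`, so the pipeline exits 0 and the test interprets the zero count as "no local SSDs" rather than as an error. A hedged sketch of that detection logic (this is an illustration, not the framework's Go code; the `count_local_ssds` name is invented here):

```python
import os

SSD_DIR = "/mnt/disks/by-uuid/google-local-ssds-scsi-fs"

def count_local_ssds(path=SSD_DIR):
    """Mimic the probe's `ls -1 <path> | wc -l` behavior: a missing
    directory yields a count of 0 rather than an error, which is why
    the spec skips with "Requires at least 1 scsi fs localSSD"
    instead of failing."""
    try:
        return len(os.listdir(path))
    except FileNotFoundError:
        return 0

if count_local_ssds() < 1:
    # Same skip reason the log records above.
    print("Requires at least 1 scsi fs localSSD")
```

On a node where the by-uuid directory exists and holds formatted local SSDs, the count would be positive and the specs would run instead of skipping.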
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:37.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 24 20:34:41.714: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8758 PodName:hostexec-leguer-worker-txt8t ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:34:41.714: INFO: >>> kubeConfig: /root/.kube/config
May 24 20:34:41.894: INFO: exec leguer-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 24 20:34:41.894: INFO: exec leguer-worker: stdout: "0\n"
May 24 20:34:41.894: INFO: exec leguer-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 24 20:34:41.894: INFO: exec leguer-worker: exit code: 0
May 24 20:34:41.894: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:41.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8758" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [4.242 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Set fsGroup for local volume [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256
      should set different fsGroup for second pod if first pod is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
[sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:41.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:34:41.960: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:41.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-3858" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.045 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create metrics for total time taken in volume operations in P/V Controller [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC
  should be able to mount volume and read from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:41.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 24 20:34:44.026: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-710 PodName:hostexec-leguer-worker-cttcq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:34:44.026: INFO: >>> kubeConfig: /root/.kube/config
May 24 20:34:44.196: INFO: exec leguer-worker: command:   ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 24 20:34:44.196: INFO: exec leguer-worker: stdout:    "0\n"
May 24 20:34:44.196: INFO: exec leguer-worker: stderr:    "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 24 20:34:44.196: INFO: exec leguer-worker: exit code: 0
May 24 20:34:44.196: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:44.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-710" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.237 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController
  should create unbound pv count metrics for pvc controller after creating pv only
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:484
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:44.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 24 20:34:44.249: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:44.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-4678" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.045 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create unbound pv count metrics for pvc controller after creating pv only
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:484

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC
  should be able to mount volume and write from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:34:44.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 24 20:34:46.308: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2150 PodName:hostexec-leguer-worker-rkfmb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 24 20:34:46.308: INFO: >>> kubeConfig: /root/.kube/config
May 24 20:34:46.473: INFO: exec leguer-worker: command:   ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 24 20:34:46.473: INFO: exec leguer-worker: stdout:    "0\n"
May 24 20:34:46.473: INFO: exec leguer-worker: stderr:    "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 24 20:34:46.473: INFO: exec leguer-worker: exit code: 0
May 24 20:34:46.473: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:34:46.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2150" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.223 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSS
May 24 20:34:46.491: INFO: Running AfterSuite actions on all nodes
May 24 20:34:46.491: INFO: Running AfterSuite actions on node 1
May 24 20:34:46.491: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml
{"msg":"Test Suite completed","total":18,"completed":2,"skipped":5665,"failed":0}

Ran 2 of 5667 Specs in 199.902 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 5665 Skipped
PASS

Ginkgo ran 1 suite in 3m21.607052697s
Test Suite Passed
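A note on the localSSD probes logged above: the hostexec pod reports `exit code: 0` even though `ls` printed "No such file or directory" to stderr. This is standard POSIX shell behavior, not a framework quirk — the exit status of a pipeline is that of its last command (`wc -l`), so the test relies on the stdout count ("0") rather than the exit code to decide it should skip. A minimal sketch reproducing this outside the cluster (`/no/such/dir` is an illustrative stand-in for the missing mount point, not a path from the log):

```shell
# A pipeline's exit status is that of its LAST command, so a failing
# `ls` piped into `wc -l` still yields status 0. The e2e probe therefore
# checks the counted output, not the exit code.
count=$(sh -c 'ls -1 /no/such/dir | wc -l' 2>/dev/null)
status=$?
echo "count=$count status=$status"   # count=0 status=0
```

If the probe needed the pipeline to fail loudly instead, `set -o pipefail` (in shells that support it, e.g. bash) would propagate the `ls` failure.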