I0507 00:29:49.749317 22 e2e.go:129] Starting e2e run "19ae13ac-fdfa-46b3-8eda-0fdd569bde2f" on Ginkgo node 1 {"msg":"Test Suite starting","total":21,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1651883388 - Will randomize all specs Will run 21 of 5773 specs May 7 00:29:49.830: INFO: >>> kubeConfig: /root/.kube/config May 7 00:29:49.835: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 7 00:29:49.864: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 7 00:29:49.928: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting May 7 00:29:49.928: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting May 7 00:29:49.928: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 7 00:29:49.928: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 7 00:29:49.928: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 7 00:29:49.946: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) May 7 00:29:49.946: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) May 7 00:29:49.946: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) May 7 00:29:49.946: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) May 7 00:29:49.946: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) May 7 00:29:49.946: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) May 7 00:29:49.946: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) May 7 00:29:49.946: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 7 00:29:49.946: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) May 7 00:29:49.946: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) May 7 00:29:49.946: INFO: e2e test version: v1.21.9 May 7 00:29:49.948: INFO: kube-apiserver version: v1.21.1 May 7 00:29:49.948: INFO: >>> kubeConfig: /root/.kube/config May 7 00:29:49.955: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:29:49.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W0507 00:29:49.995307 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 7 00:29:49.995: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 7 00:29:49.998: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "node1" STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f3d1f0d7-8706-4dce-8b5f-8e8dcf49a376" May 7 00:29:52.033: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f3d1f0d7-8706-4dce-8b5f-8e8dcf49a376" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f3d1f0d7-8706-4dce-8b5f-8e8dcf49a376" "/tmp/local-volume-test-f3d1f0d7-8706-4dce-8b5f-8e8dcf49a376"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6ae804f0-ac40-4bb7-8060-e21ac839a47c" May 7 00:29:52.127: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6ae804f0-ac40-4bb7-8060-e21ac839a47c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6ae804f0-ac40-4bb7-8060-e21ac839a47c" "/tmp/local-volume-test-6ae804f0-ac40-4bb7-8060-e21ac839a47c"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-9a5beff0-efce-46f7-9f7d-7ac1b836d963" May 7 00:29:52.218: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9a5beff0-efce-46f7-9f7d-7ac1b836d963" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9a5beff0-efce-46f7-9f7d-7ac1b836d963" "/tmp/local-volume-test-9a5beff0-efce-46f7-9f7d-7ac1b836d963"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d3f01fbe-1d0c-4b09-aac7-7375cf23feb6" May 7 00:29:52.305: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d3f01fbe-1d0c-4b09-aac7-7375cf23feb6" && mount -t tmpfs 
-o size=10m tmpfs-"/tmp/local-volume-test-d3f01fbe-1d0c-4b09-aac7-7375cf23feb6" "/tmp/local-volume-test-d3f01fbe-1d0c-4b09-aac7-7375cf23feb6"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f91597e1-bad4-4e4a-a82e-966aeeaacbb1" May 7 00:29:52.392: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f91597e1-bad4-4e4a-a82e-966aeeaacbb1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f91597e1-bad4-4e4a-a82e-966aeeaacbb1" "/tmp/local-volume-test-f91597e1-bad4-4e4a-a82e-966aeeaacbb1"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-91ab0886-9c2b-4080-a7d6-78983895d9a8" May 7 00:29:52.516: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-91ab0886-9c2b-4080-a7d6-78983895d9a8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-91ab0886-9c2b-4080-a7d6-78983895d9a8" "/tmp/local-volume-test-91ab0886-9c2b-4080-a7d6-78983895d9a8"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-2026210a-b023-4419-978b-966593d4a9f7" May 7 00:29:52.604: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2026210a-b023-4419-978b-966593d4a9f7" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2026210a-b023-4419-978b-966593d4a9f7" "/tmp/local-volume-test-2026210a-b023-4419-978b-966593d4a9f7"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-cf1b6dc9-fd73-414c-8c22-7e54160655f9" May 7 00:29:52.693: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-cf1b6dc9-fd73-414c-8c22-7e54160655f9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cf1b6dc9-fd73-414c-8c22-7e54160655f9" "/tmp/local-volume-test-cf1b6dc9-fd73-414c-8c22-7e54160655f9"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f47a0d81-00dc-461e-b591-b0df5376411e" May 7 00:29:52.795: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f47a0d81-00dc-461e-b591-b0df5376411e" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-f47a0d81-00dc-461e-b591-b0df5376411e" "/tmp/local-volume-test-f47a0d81-00dc-461e-b591-b0df5376411e"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6c93ea2a-2cc3-46f3-8f6d-918c446bb3eb" May 7 00:29:52.921: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6c93ea2a-2cc3-46f3-8f6d-918c446bb3eb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6c93ea2a-2cc3-46f3-8f6d-918c446bb3eb" "/tmp/local-volume-test-6c93ea2a-2cc3-46f3-8f6d-918c446bb3eb"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:52.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-83557c97-b64e-4b61-bb5b-dfe7954d82b0" May 7 00:29:57.102: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-83557c97-b64e-4b61-bb5b-dfe7954d82b0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-83557c97-b64e-4b61-bb5b-dfe7954d82b0" "/tmp/local-volume-test-83557c97-b64e-4b61-bb5b-dfe7954d82b0"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:57.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-852ad5f5-a98e-4c78-ba46-273d4b330cd2" May 7 00:29:57.204: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-852ad5f5-a98e-4c78-ba46-273d4b330cd2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-852ad5f5-a98e-4c78-ba46-273d4b330cd2" "/tmp/local-volume-test-852ad5f5-a98e-4c78-ba46-273d4b330cd2"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:57.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-0b8f0a1a-bf86-48ec-9de0-0983fbbed05d" May 7 00:29:57.296: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0b8f0a1a-bf86-48ec-9de0-0983fbbed05d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0b8f0a1a-bf86-48ec-9de0-0983fbbed05d" "/tmp/local-volume-test-0b8f0a1a-bf86-48ec-9de0-0983fbbed05d"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:57.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-915cc1b4-bdf7-4e56-a446-69c7b106c3ac" May 7 00:29:57.463: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-915cc1b4-bdf7-4e56-a446-69c7b106c3ac" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-915cc1b4-bdf7-4e56-a446-69c7b106c3ac" "/tmp/local-volume-test-915cc1b4-bdf7-4e56-a446-69c7b106c3ac"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:57.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-506924ac-14b2-4e6f-aa2c-581116d02764" May 7 00:29:57.570: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-506924ac-14b2-4e6f-aa2c-581116d02764" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-506924ac-14b2-4e6f-aa2c-581116d02764" "/tmp/local-volume-test-506924ac-14b2-4e6f-aa2c-581116d02764"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:57.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9f31a35c-4f90-4e3c-9708-be85fe548fe8" May 7 00:29:57.680: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9f31a35c-4f90-4e3c-9708-be85fe548fe8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9f31a35c-4f90-4e3c-9708-be85fe548fe8" "/tmp/local-volume-test-9f31a35c-4f90-4e3c-9708-be85fe548fe8"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:57.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ab39713f-533f-4e55-b647-ac40024f72c2" May 7 00:29:57.781: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ab39713f-533f-4e55-b647-ac40024f72c2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ab39713f-533f-4e55-b647-ac40024f72c2" "/tmp/local-volume-test-ab39713f-533f-4e55-b647-ac40024f72c2"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:57.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e13a9b4f-1372-4b1f-9f69-60d21476a8ae" May 7 00:29:57.907: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e13a9b4f-1372-4b1f-9f69-60d21476a8ae" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e13a9b4f-1372-4b1f-9f69-60d21476a8ae" "/tmp/local-volume-test-e13a9b4f-1372-4b1f-9f69-60d21476a8ae"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:57.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-f29047ff-414d-4f04-986f-dc484f44ed30" May 7 00:29:58.015: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f29047ff-414d-4f04-986f-dc484f44ed30" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-f29047ff-414d-4f04-986f-dc484f44ed30" "/tmp/local-volume-test-f29047ff-414d-4f04-986f-dc484f44ed30"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:58.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-35f81125-ad07-4b84-814d-75ad34aee74d" May 7 00:29:58.126: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-35f81125-ad07-4b84-814d-75ad34aee74d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-35f81125-ad07-4b84-814d-75ad34aee74d" "/tmp/local-volume-test-35f81125-ad07-4b84-814d-75ad34aee74d"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:29:58.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully May 7 00:30:03.431: INFO: Deleting pod pod-6801c7f5-b057-4fa6-9c1c-57c3ccab07eb May 7 00:30:03.437: INFO: Deleting PersistentVolumeClaim "pvc-rq5hq" May 7 00:30:03.441: INFO: Deleting PersistentVolumeClaim "pvc-4dvgg" May 7 00:30:03.445: INFO: Deleting PersistentVolumeClaim "pvc-wjz6j" May 7 00:30:03.449: INFO: 1/28 pods finished STEP: Delete "local-pvbtrz5" and create a new PV for same local volume storage STEP: Delete "local-pvtlwr7" and create a new PV for same local volume storage STEP: Delete "local-pvkjtsb" and create a new PV for same local volume storage May 7 00:30:05.431: INFO: Deleting pod pod-6763e112-b2ab-4492-a041-0be38a9b62d0 May 7 00:30:05.439: INFO: Deleting PersistentVolumeClaim "pvc-ltk92" May 7 00:30:05.442: INFO: Deleting PersistentVolumeClaim "pvc-hz8zh" May 7 00:30:05.448: INFO: Deleting PersistentVolumeClaim "pvc-zrdpg" May 7 00:30:05.452: INFO: 2/28 pods finished STEP: Delete "local-pvfdr9r" and create a new PV for same local volume storage STEP: Delete "local-pv9ttqd" and create a new PV for same local volume storage STEP: Delete "local-pvq5bgm" and create a new PV for same local volume storage STEP: Delete "local-pvq5bgm" and create a new PV for same local volume storage May 7 00:30:06.431: INFO: Deleting pod pod-66d5e6b1-da35-4b8a-ab68-03040aed5a4b May 7 00:30:06.438: INFO: Deleting PersistentVolumeClaim "pvc-dcwkj" May 7 00:30:06.441: INFO: Deleting PersistentVolumeClaim "pvc-gncgd" May 7 00:30:06.445: INFO: Deleting PersistentVolumeClaim "pvc-l2hwh" May 7 00:30:06.448: INFO: 3/28 pods finished STEP: Delete "local-pv5bj5l" and create a new PV for same local volume storage STEP: Delete "local-pvq65f6" and create a new PV for same local volume storage STEP: Delete "local-pvlzbf4" and create a new PV for same local volume storage May 7 00:30:07.430: INFO: Deleting pod pod-bd3f34ed-598e-4cd5-89a6-101e21efb47e May 7 00:30:07.440: INFO: Deleting PersistentVolumeClaim "pvc-58fnv" May 7 00:30:07.444: INFO: Deleting PersistentVolumeClaim "pvc-9tr7b" May 7 00:30:07.448: INFO: Deleting PersistentVolumeClaim "pvc-2qkqp" May 7 00:30:07.452: INFO: 4/28 pods finished STEP: Delete 
"local-pv7nbxj" and create a new PV for same local volume storage STEP: Delete "local-pvhtcnv" and create a new PV for same local volume storage STEP: Delete "local-pvmq85k" and create a new PV for same local volume storage May 7 00:30:09.432: INFO: Deleting pod pod-a64ee32c-7c6b-4434-aac2-62245a91690d May 7 00:30:09.440: INFO: Deleting PersistentVolumeClaim "pvc-nxwv6" May 7 00:30:09.444: INFO: Deleting PersistentVolumeClaim "pvc-td4cd" May 7 00:30:09.448: INFO: Deleting PersistentVolumeClaim "pvc-797b5" May 7 00:30:09.451: INFO: 5/28 pods finished STEP: Delete "local-pv28x6l" and create a new PV for same local volume storage STEP: Delete "local-pv6dkqk" and create a new PV for same local volume storage STEP: Delete "local-pvzkqsg" and create a new PV for same local volume storage May 7 00:30:12.430: INFO: Deleting pod pod-07decc93-0764-4abc-bd03-b7331e55d6db May 7 00:30:12.438: INFO: Deleting PersistentVolumeClaim "pvc-s54mp" May 7 00:30:12.441: INFO: Deleting PersistentVolumeClaim "pvc-q8csr" May 7 00:30:12.445: INFO: Deleting PersistentVolumeClaim "pvc-bsk2f" May 7 00:30:12.448: INFO: 6/28 pods finished May 7 00:30:12.448: INFO: Deleting pod pod-514fd789-72ad-4eb2-89c0-b385bc529a2c STEP: Delete "local-pvbdqr5" and create a new PV for same local volume storage May 7 00:30:12.455: INFO: Deleting PersistentVolumeClaim "pvc-vkzzl" May 7 00:30:12.459: INFO: Deleting PersistentVolumeClaim "pvc-l5bk6" May 7 00:30:12.463: INFO: Deleting PersistentVolumeClaim "pvc-qsmwm" STEP: Delete "local-pvjpjfd" and create a new PV for same local volume storage May 7 00:30:12.466: INFO: 7/28 pods finished STEP: Delete "local-pvxlj5w" and create a new PV for same local volume storage STEP: Delete "local-pvzww82" and create a new PV for same local volume storage STEP: Delete "local-pv8ps5f" and create a new PV for same local volume storage STEP: Delete "local-pvd6nkn" and create a new PV for same local volume storage May 7 00:30:16.433: INFO: Deleting pod pod-441e920a-d575-402d-8b2c-ea5aafac4be1 May 7 00:30:16.442: INFO: Deleting PersistentVolumeClaim "pvc-fj98p" May 7 00:30:16.445: INFO: Deleting PersistentVolumeClaim "pvc-l8cb6" May 7 00:30:16.449: INFO: Deleting PersistentVolumeClaim "pvc-zp5mf" May 7 00:30:16.453: INFO: 8/28 pods finished May 7 00:30:16.453: INFO: Deleting pod pod-ecfbfbb0-f04b-4213-88eb-34521ca585b7 May 7 00:30:16.460: INFO: Deleting PersistentVolumeClaim "pvc-fq4vw" STEP: Delete "local-pvpv8wd" and create a new PV for same local volume storage May 7 00:30:16.464: INFO: Deleting PersistentVolumeClaim "pvc-xmqpq" May 7 00:30:16.468: INFO: Deleting PersistentVolumeClaim "pvc-w9lvm" May 7 00:30:16.471: INFO: 9/28 pods finished STEP: Delete "local-pvkp85k" and create a new PV for same local volume storage STEP: Delete "local-pvxw9ph" and create a new PV for same local volume storage STEP: Delete "local-pvsslht" and create a new PV for same local volume storage STEP: Delete "local-pv2nlnq" and create a new PV for same local volume storage STEP: Delete "local-pv2mfll" and create a new PV for same local volume storage May 7 00:30:18.431: INFO: Deleting pod pod-a477dedf-9fd7-4e54-895d-1b00565045cb May 7 00:30:18.439: INFO: Deleting PersistentVolumeClaim "pvc-h6wnw" May 7 00:30:18.442: INFO: Deleting PersistentVolumeClaim "pvc-mqptc" May 7 00:30:18.446: INFO: Deleting PersistentVolumeClaim "pvc-2j4lp" May 7 00:30:18.449: INFO: 10/28 pods finished May 7 00:30:18.449: INFO: Deleting pod pod-a917eacf-885e-47ee-9a1a-2d9d7bb7c0df May 7 00:30:18.458: INFO: Deleting PersistentVolumeClaim "pvc-dh6dv" 
STEP: Delete "local-pvr58g4" and create a new PV for same local volume storage May 7 00:30:18.461: INFO: Deleting PersistentVolumeClaim "pvc-b6rwl" May 7 00:30:18.465: INFO: Deleting PersistentVolumeClaim "pvc-v6wfn" May 7 00:30:18.469: INFO: 11/28 pods finished STEP: Delete "local-pvkx9gn" and create a new PV for same local volume storage STEP: Delete "local-pvwjqqt" and create a new PV for same local volume storage STEP: Delete "local-pv8cnz9" and create a new PV for same local volume storage STEP: Delete "local-pvhmr7l" and create a new PV for same local volume storage STEP: Delete "local-pv57njz" and create a new PV for same local volume storage May 7 00:30:22.430: INFO: Deleting pod pod-6c7d513d-a497-4598-875d-8eed94e42f91 May 7 00:30:22.437: INFO: Deleting PersistentVolumeClaim "pvc-hz2bg" May 7 00:30:22.440: INFO: Deleting PersistentVolumeClaim "pvc-hl4xh" May 7 00:30:22.444: INFO: Deleting PersistentVolumeClaim "pvc-c2qbl" May 7 00:30:22.449: INFO: 12/28 pods finished STEP: Delete "local-pvx8v86" and create a new PV for same local volume storage STEP: Delete "local-pvnpfrz" and create a new PV for same local volume storage STEP: Delete "local-pvsgzzk" and create a new PV for same local volume storage May 7 00:30:25.431: INFO: Deleting pod pod-eb593ae1-8607-4aad-aef0-d7841a109d76 May 7 00:30:25.439: INFO: Deleting PersistentVolumeClaim "pvc-jj7sz" May 7 00:30:25.444: INFO: Deleting PersistentVolumeClaim "pvc-j6l9f" May 7 00:30:25.447: INFO: Deleting PersistentVolumeClaim "pvc-8ssw9" May 7 00:30:25.451: INFO: 13/28 pods finished STEP: Delete "local-pv6nvw6" and create a new PV for same local volume storage STEP: Delete "local-pvfghj4" and create a new PV for same local volume storage STEP: Delete "local-pvds2s8" and create a new PV for same local volume storage May 7 00:30:26.432: INFO: Deleting pod pod-b22758a2-468b-46d6-9bb2-b6301209fae5 May 7 00:30:26.440: INFO: Deleting PersistentVolumeClaim "pvc-w7rvm" May 7 00:30:26.443: INFO: Deleting PersistentVolumeClaim "pvc-ctsrl" May 7 00:30:26.447: INFO: Deleting PersistentVolumeClaim "pvc-64ccf" May 7 00:30:26.450: INFO: 14/28 pods finished STEP: Delete "local-pvzvjfg" and create a new PV for same local volume storage STEP: Delete "local-pv6dvtt" and create a new PV for same local volume storage STEP: Delete "local-pv84zqd" and create a new PV for same local volume storage May 7 00:30:28.430: INFO: Deleting pod pod-1e82bb71-43c2-460c-9a2a-2a8486d72bf5 May 7 00:30:28.438: INFO: Deleting PersistentVolumeClaim "pvc-njpgz" May 7 00:30:28.441: INFO: Deleting PersistentVolumeClaim "pvc-9prws" May 7 00:30:28.445: INFO: Deleting PersistentVolumeClaim "pvc-qgzrc" May 7 00:30:28.449: INFO: 15/28 pods finished STEP: Delete "local-pvvs6gx" and create a new PV for same local volume storage STEP: Delete "local-pvmz6dq" and create a new PV for same local volume storage STEP: Delete "local-pvsbmst" and create a new PV for same local volume storage May 7 00:30:29.429: INFO: Deleting pod pod-0b423115-f759-4f32-8140-04f173967a97 May 7 00:30:29.436: INFO: Deleting PersistentVolumeClaim "pvc-cxxtr" May 7 00:30:29.440: INFO: Deleting PersistentVolumeClaim "pvc-5ptd4" May 7 00:30:29.443: INFO: Deleting PersistentVolumeClaim "pvc-l7zxd" May 7 00:30:29.447: INFO: 16/28 pods finished STEP: Delete "local-pvlvd6v" and create a new PV for same local volume storage STEP: Delete "local-pvjkqvn" and create a new PV for same local volume storage STEP: Delete "local-pvzcg65" and create a new PV for same local volume storage May 7 00:30:31.430: INFO: Deleting pod 
pod-810600e5-b168-4505-87dc-105521798a3c May 7 00:30:31.438: INFO: Deleting PersistentVolumeClaim "pvc-8tcjd" May 7 00:30:31.441: INFO: Deleting PersistentVolumeClaim "pvc-fwksz" May 7 00:30:31.445: INFO: Deleting PersistentVolumeClaim "pvc-97wfp" May 7 00:30:31.449: INFO: 17/28 pods finished STEP: Delete "local-pvmj27g" and create a new PV for same local volume storage STEP: Delete "local-pvnrrgg" and create a new PV for same local volume storage STEP: Delete "local-pvhdgmg" and create a new PV for same local volume storage May 7 00:30:32.429: INFO: Deleting pod pod-f7830f6d-f094-4757-8d1b-6124bd7755b6 May 7 00:30:32.437: INFO: Deleting PersistentVolumeClaim "pvc-92pw5" May 7 00:30:32.441: INFO: Deleting PersistentVolumeClaim "pvc-42rkn" May 7 00:30:32.445: INFO: Deleting PersistentVolumeClaim "pvc-q6l9v" May 7 00:30:32.448: INFO: 18/28 pods finished STEP: Delete "local-pvhrl52" and create a new PV for same local volume storage STEP: Delete "local-pvrb6vl" and create a new PV for same local volume storage STEP: Delete "local-pvtvm54" and create a new PV for same local volume storage May 7 00:30:36.434: INFO: Deleting pod pod-11b08c62-2729-409d-833f-1d65deb3b93d May 7 00:30:36.442: INFO: Deleting PersistentVolumeClaim "pvc-pvktb" May 7 00:30:36.446: INFO: Deleting PersistentVolumeClaim "pvc-g5822" May 7 00:30:36.449: INFO: Deleting PersistentVolumeClaim "pvc-zs95m" May 7 00:30:36.453: INFO: 19/28 pods finished STEP: Delete "local-pvbqmfs" and create a new PV for same local volume storage STEP: Delete "local-pvqbg4n" and create a new PV for same local volume storage STEP: Delete "local-pvt4dpt" and create a new PV for same local volume storage May 7 00:30:37.430: INFO: Deleting pod pod-22d9fbf9-2f70-4e63-a209-0ce9c1e71d72 May 7 00:30:37.438: INFO: Deleting PersistentVolumeClaim "pvc-npswl" May 7 00:30:37.441: INFO: Deleting PersistentVolumeClaim "pvc-s7vs4" May 7 00:30:37.445: INFO: Deleting PersistentVolumeClaim "pvc-fst6x" May 7 00:30:37.449: INFO: 20/28 pods finished STEP: Delete "local-pv54mdf" and create a new PV for same local volume storage STEP: Delete "local-pvb4j2k" and create a new PV for same local volume storage STEP: Delete "local-pvh4d6s" and create a new PV for same local volume storage May 7 00:30:39.431: INFO: Deleting pod pod-3fe61a71-f35d-4f8e-b789-0c8ad4f64b65 May 7 00:30:39.441: INFO: Deleting PersistentVolumeClaim "pvc-8vjpb" May 7 00:30:39.445: INFO: Deleting PersistentVolumeClaim "pvc-b27r8" May 7 00:30:39.450: INFO: Deleting PersistentVolumeClaim "pvc-k5gc9" May 7 00:30:39.454: INFO: 21/28 pods finished STEP: Delete "local-pv74qfd" and create a new PV for same local volume storage STEP: Delete "local-pv9jrhx" and create a new PV for same local volume storage STEP: Delete "local-pvc9xth" and create a new PV for same local volume storage May 7 00:30:41.429: INFO: Deleting pod pod-a2869cf7-6cd8-4fa9-9349-eb42e5b87a46 May 7 00:30:41.437: INFO: Deleting PersistentVolumeClaim "pvc-dgvkx" May 7 00:30:41.440: INFO: Deleting PersistentVolumeClaim "pvc-dcjsc" May 7 00:30:41.444: INFO: Deleting PersistentVolumeClaim "pvc-zn79n" May 7 00:30:41.447: INFO: 22/28 pods finished May 7 00:30:41.447: INFO: Deleting pod pod-c94024f9-6547-434a-8fee-308dde485bd3 STEP: Delete "local-pvfjtkl" and create a new PV for same local volume storage May 7 00:30:41.454: INFO: Deleting PersistentVolumeClaim "pvc-dcl6h" May 7 00:30:41.458: INFO: Deleting PersistentVolumeClaim "pvc-ghz8c" May 7 00:30:41.462: INFO: Deleting PersistentVolumeClaim "pvc-jz99s" May 7 00:30:41.466: INFO: 23/28 pods 
finished STEP: Delete "local-pv6d7r9" and create a new PV for same local volume storage STEP: Delete "local-pvp9zdf" and create a new PV for same local volume storage STEP: Delete "local-pvrhv5x" and create a new PV for same local volume storage STEP: Delete "local-pvr2cml" and create a new PV for same local volume storage STEP: Delete "local-pvph8kj" and create a new PV for same local volume storage May 7 00:30:43.431: INFO: Deleting pod pod-5dfe6a73-c98f-4c95-8a9c-08563339e2db May 7 00:30:43.438: INFO: Deleting PersistentVolumeClaim "pvc-mb7ng" May 7 00:30:43.442: INFO: Deleting PersistentVolumeClaim "pvc-wxr72" May 7 00:30:43.445: INFO: Deleting PersistentVolumeClaim "pvc-blw9n" May 7 00:30:43.450: INFO: 24/28 pods finished STEP: Delete "local-pv2wc8d" and create a new PV for same local volume storage STEP: Delete "local-pv5n7qh" and create a new PV for same local volume storage STEP: Delete "local-pvctxwj" and create a new PV for same local volume storage May 7 00:30:44.431: INFO: Deleting pod pod-300a15fe-e639-47a9-aff7-e15cd59db10b May 7 00:30:44.439: INFO: Deleting PersistentVolumeClaim "pvc-rmxxt" May 7 00:30:44.443: INFO: Deleting PersistentVolumeClaim "pvc-b5qmz" May 7 00:30:44.447: INFO: Deleting PersistentVolumeClaim "pvc-9fmx4" May 7 00:30:44.451: INFO: 25/28 pods finished STEP: Delete "local-pvsc95r" and create a new PV for same local volume storage STEP: Delete "local-pvqxfcw" and create a new PV for same local volume storage STEP: Delete "local-pvbjr6n" and create a new PV for same local volume storage May 7 00:30:47.430: INFO: Deleting pod pod-48d2739e-1a61-421b-840f-533257cb6f45 May 7 00:30:47.438: INFO: Deleting PersistentVolumeClaim "pvc-mkw5n" May 7 00:30:47.442: INFO: Deleting PersistentVolumeClaim "pvc-5wxzh" May 7 00:30:47.445: INFO: Deleting PersistentVolumeClaim "pvc-7d72j" May 7 00:30:47.449: INFO: 26/28 pods finished STEP: Delete "local-pv8hm8w" and create a new PV for same local volume storage STEP: Delete "local-pvpfnc5" and create a new PV for same local volume storage STEP: Delete "local-pvjf84r" and create a new PV for same local volume storage May 7 00:30:48.429: INFO: Deleting pod pod-3cb3ce38-2ccd-4400-8132-80a4935cf4c8 May 7 00:30:48.438: INFO: Deleting PersistentVolumeClaim "pvc-86wdp" May 7 00:30:48.443: INFO: Deleting PersistentVolumeClaim "pvc-lztlj" May 7 00:30:48.447: INFO: Deleting PersistentVolumeClaim "pvc-dgp4h" May 7 00:30:48.450: INFO: 27/28 pods finished STEP: Delete "local-pvd685c" and create a new PV for same local volume storage STEP: Delete "local-pvr4pss" and create a new PV for same local volume storage STEP: Delete "local-pvdsg4b" and create a new PV for same local volume storage May 7 00:30:49.430: INFO: Deleting pod pod-927e0776-e855-4862-94c2-609630728d39 May 7 00:30:49.437: INFO: Deleting PersistentVolumeClaim "pvc-xxt47" May 7 00:30:49.441: INFO: Deleting PersistentVolumeClaim "pvc-hr9kb" May 7 00:30:49.445: INFO: Deleting PersistentVolumeClaim "pvc-7bphs" May 7 00:30:49.448: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "node1" STEP: Cleaning up PVC and PV May 7 00:30:49.448: INFO: pvc is nil May 7 00:30:49.448: INFO: Deleting PersistentVolume "local-pvvrfsl" STEP: Cleaning up PVC and PV May 7 00:30:49.452: INFO: pvc is nil May 7 00:30:49.452: INFO: Deleting 
PersistentVolume "local-pvzqtfd" STEP: Cleaning up PVC and PV May 7 00:30:49.456: INFO: pvc is nil May 7 00:30:49.456: INFO: Deleting PersistentVolume "local-pvqf7d4" STEP: Cleaning up PVC and PV May 7 00:30:49.460: INFO: pvc is nil May 7 00:30:49.460: INFO: Deleting PersistentVolume "local-pvmljrr" STEP: Cleaning up PVC and PV May 7 00:30:49.463: INFO: pvc is nil May 7 00:30:49.463: INFO: Deleting PersistentVolume "local-pvd8wmm" STEP: Cleaning up PVC and PV May 7 00:30:49.466: INFO: pvc is nil May 7 00:30:49.466: INFO: Deleting PersistentVolume "local-pvnp4qx" STEP: Cleaning up PVC and PV May 7 00:30:49.470: INFO: pvc is nil May 7 00:30:49.470: INFO: Deleting PersistentVolume "local-pv7956v" STEP: Cleaning up PVC and PV May 7 00:30:49.473: INFO: pvc is nil May 7 00:30:49.473: INFO: Deleting PersistentVolume "local-pv9jn9n" STEP: Cleaning up PVC and PV May 7 00:30:49.476: INFO: pvc is nil May 7 00:30:49.476: INFO: Deleting PersistentVolume "local-pvrmkjt" STEP: Cleaning up PVC and PV May 7 00:30:49.480: INFO: pvc is nil May 7 00:30:49.480: INFO: Deleting PersistentVolume "local-pvrq5hj" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f3d1f0d7-8706-4dce-8b5f-8e8dcf49a376" May 7 00:30:49.483: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f3d1f0d7-8706-4dce-8b5f-8e8dcf49a376"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:49.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:49.580: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f3d1f0d7-8706-4dce-8b5f-8e8dcf49a376] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:49.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6ae804f0-ac40-4bb7-8060-e21ac839a47c" May 7 00:30:49.661: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6ae804f0-ac40-4bb7-8060-e21ac839a47c"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:49.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:49.770: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6ae804f0-ac40-4bb7-8060-e21ac839a47c] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:49.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-9a5beff0-efce-46f7-9f7d-7ac1b836d963" May 7 00:30:49.849: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9a5beff0-efce-46f7-9f7d-7ac1b836d963"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 
7 00:30:49.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:49.956: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9a5beff0-efce-46f7-9f7d-7ac1b836d963] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:49.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d3f01fbe-1d0c-4b09-aac7-7375cf23feb6" May 7 00:30:50.045: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d3f01fbe-1d0c-4b09-aac7-7375cf23feb6"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:50.131: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d3f01fbe-1d0c-4b09-aac7-7375cf23feb6] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f91597e1-bad4-4e4a-a82e-966aeeaacbb1" May 7 00:30:50.237: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f91597e1-bad4-4e4a-a82e-966aeeaacbb1"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:50.325: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f91597e1-bad4-4e4a-a82e-966aeeaacbb1] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-91ab0886-9c2b-4080-a7d6-78983895d9a8" May 7 00:30:50.411: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-91ab0886-9c2b-4080-a7d6-78983895d9a8"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:50.532: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-91ab0886-9c2b-4080-a7d6-78983895d9a8] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path 
"/tmp/local-volume-test-2026210a-b023-4419-978b-966593d4a9f7" May 7 00:30:50.626: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2026210a-b023-4419-978b-966593d4a9f7"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:50.711: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2026210a-b023-4419-978b-966593d4a9f7] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-cf1b6dc9-fd73-414c-8c22-7e54160655f9" May 7 00:30:50.810: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cf1b6dc9-fd73-414c-8c22-7e54160655f9"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:50.903: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cf1b6dc9-fd73-414c-8c22-7e54160655f9] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f47a0d81-00dc-461e-b591-b0df5376411e" May 7 00:30:50.981: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f47a0d81-00dc-461e-b591-b0df5376411e"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:50.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:51.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f47a0d81-00dc-461e-b591-b0df5376411e] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6c93ea2a-2cc3-46f3-8f6d-918c446bb3eb" May 7 00:30:51.190: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6c93ea2a-2cc3-46f3-8f6d-918c446bb3eb"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:51.310: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6c93ea2a-2cc3-46f3-8f6d-918c446bb3eb] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node1-4dw8d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "node2" STEP: Cleaning up PVC and PV May 7 00:30:51.420: INFO: pvc is nil May 7 00:30:51.420: INFO: Deleting PersistentVolume "local-pv7wjlm" STEP: Cleaning up PVC and PV May 7 00:30:51.426: INFO: pvc is nil May 7 00:30:51.426: INFO: Deleting PersistentVolume "local-pvbs4tn" STEP: Cleaning up PVC and PV May 7 00:30:51.429: INFO: pvc is nil May 7 00:30:51.429: INFO: Deleting PersistentVolume "local-pvnbmwl" STEP: Cleaning up PVC and PV May 7 00:30:51.433: INFO: pvc is nil May 7 00:30:51.433: INFO: Deleting PersistentVolume "local-pvfpz6r" STEP: Cleaning up PVC and PV May 7 00:30:51.437: INFO: pvc is nil May 7 00:30:51.437: INFO: Deleting PersistentVolume "local-pvthb4b" STEP: Cleaning up PVC and PV May 7 00:30:51.441: INFO: pvc is nil May 7 00:30:51.441: INFO: Deleting PersistentVolume "local-pvq6xzg" STEP: Cleaning up PVC and PV May 7 00:30:51.445: INFO: pvc is nil May 7 00:30:51.445: INFO: Deleting PersistentVolume "local-pvxtk2f" STEP: Cleaning up PVC and PV May 7 00:30:51.449: INFO: pvc is nil May 7 00:30:51.449: INFO: Deleting PersistentVolume "local-pvxmrkp" STEP: Cleaning up PVC and PV May 7 00:30:51.452: INFO: pvc is nil May 7 00:30:51.452: INFO: Deleting PersistentVolume "local-pvfwdtt" STEP: Cleaning up PVC and PV May 7 00:30:51.456: INFO: pvc is nil May 7 00:30:51.456: INFO: Deleting PersistentVolume "local-pvp9fhz" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-83557c97-b64e-4b61-bb5b-dfe7954d82b0" May 7 00:30:51.460: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-83557c97-b64e-4b61-bb5b-dfe7954d82b0"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:51.553: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-83557c97-b64e-4b61-bb5b-dfe7954d82b0] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-852ad5f5-a98e-4c78-ba46-273d4b330cd2" May 7 00:30:51.632: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-852ad5f5-a98e-4c78-ba46-273d4b330cd2"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:51.719: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-852ad5f5-a98e-4c78-ba46-273d4b330cd2] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-0b8f0a1a-bf86-48ec-9de0-0983fbbed05d" May 7 00:30:51.797: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0b8f0a1a-bf86-48ec-9de0-0983fbbed05d"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:51.895: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0b8f0a1a-bf86-48ec-9de0-0983fbbed05d] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-915cc1b4-bdf7-4e56-a446-69c7b106c3ac" May 7 00:30:51.985: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-915cc1b4-bdf7-4e56-a446-69c7b106c3ac"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:51.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:52.076: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-915cc1b4-bdf7-4e56-a446-69c7b106c3ac] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-506924ac-14b2-4e6f-aa2c-581116d02764" May 7 00:30:52.163: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-506924ac-14b2-4e6f-aa2c-581116d02764"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:52.259: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-506924ac-14b2-4e6f-aa2c-581116d02764] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9f31a35c-4f90-4e3c-9708-be85fe548fe8" May 7 00:30:52.347: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9f31a35c-4f90-4e3c-9708-be85fe548fe8"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:52.458: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9f31a35c-4f90-4e3c-9708-be85fe548fe8] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ab39713f-533f-4e55-b647-ac40024f72c2" May 7 00:30:52.560: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ab39713f-533f-4e55-b647-ac40024f72c2"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:52.654: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ab39713f-533f-4e55-b647-ac40024f72c2] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e13a9b4f-1372-4b1f-9f69-60d21476a8ae" May 7 00:30:52.754: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e13a9b4f-1372-4b1f-9f69-60d21476a8ae"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:52.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e13a9b4f-1372-4b1f-9f69-60d21476a8ae] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-f29047ff-414d-4f04-986f-dc484f44ed30" May 7 00:30:52.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f29047ff-414d-4f04-986f-dc484f44ed30"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:52.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:53.031: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f29047ff-414d-4f04-986f-dc484f44ed30] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:53.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point 
on node "node2" at path "/tmp/local-volume-test-35f81125-ad07-4b84-814d-75ad34aee74d" May 7 00:30:53.124: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-35f81125-ad07-4b84-814d-75ad34aee74d"] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:53.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 7 00:30:53.216: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-35f81125-ad07-4b84-814d-75ad34aee74d] Namespace:persistent-local-volumes-test-5754 PodName:hostexec-node2-qxwqf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:30:53.216: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:30:53.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5754" for this suite. • [SLOW TEST:63.387 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":21,"completed":1,"skipped":550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning errors [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:147 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:30:53.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:30:53.386: INFO: Only 
supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:30:53.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4095" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:147 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:30:53.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648 STEP: Clean PV local-pv4tktk [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:22.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"persistent-local-volumes-test-7346" for this suite. • [SLOW TEST:89.544 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629 all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":21,"completed":2,"skipped":1868,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:22.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:32:22.980: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:22.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5768" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:22.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 7 00:32:25.031: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8828 PodName:hostexec-node1-cn7ff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:32:25.031: INFO: >>> kubeConfig: /root/.kube/config May 7 00:32:25.124: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 7 00:32:25.124: INFO: exec node1: stdout: "0\n" May 7 00:32:25.124: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 7 00:32:25.124: INFO: exec node1: exit code: 0 May 7 00:32:25.124: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:25.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8828" for this suite. 
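Each [Volume type: gce-localssd-scsi-fs] spec probes for a SCSI local SSD by exec'ing into a hostexec pod and counting the entries under /mnt/disks/by-uuid/google-local-ssds-scsi-fs/. The probe above prints "0" (the directory does not exist on this bare-metal node), so the spec skips with "Requires at least 1 scsi fs localSSD". Here is a standalone sketch of the same probe, shelling out to kubectl rather than going through the framework's ExecWithOptions; the namespace and pod name are copied from the run above and change on every run.

```go
package main

// Sketch only: reproduce the localSSD probe from the log with kubectl exec.
// Namespace and hostexec pod name are taken from this run and are not stable.
import (
	"fmt"
	"os/exec"
	"strings"
)

func localSSDCount() (int, error) {
	cmd := exec.Command("kubectl", "exec", "-n", "persistent-local-volumes-test-8828",
		"hostexec-node1-cn7ff", "-c", "agnhost-container", "--",
		"sh", "-c", "ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l")
	out, err := cmd.Output() // stderr (the ls error) is ignored, as in the log
	if err != nil {
		return 0, err
	}
	var n int
	_, err = fmt.Sscan(strings.TrimSpace(string(out)), &n)
	return n, err
}

func main() {
	n, err := localSSDCount()
	fmt.Println(n, err) // this run printed "0", hence the skip
}
```

A non-zero count is what the BeforeEach requires before it will set up this volume type.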
S [SKIPPING] in Spec Setup (BeforeEach) [2.148 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:25.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 7 00:32:47.188: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-node2-cgkkc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:32:47.188: INFO: >>> kubeConfig: /root/.kube/config May 7 00:32:47.911: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 7 00:32:47.911: INFO: exec node2: stdout: "0\n" May 7 00:32:47.911: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 7 00:32:47.911: INFO: exec node2: exit code: 0 May 7 00:32:47.911: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:47.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1646" for this suite. 
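The Set fsGroup variants skipped in this block normally check that the fsGroup in a pod's securityContext is applied to the mounted local volume (one pod, two pods sharing a GID, and a replacement pod getting a different GID). A sketch of the pod shape involved, using the k8s.io/api types just to print a manifest; the GID, image, and claim name are illustrative.

```go
package main

// Sketch only: a pod whose securityContext.fsGroup governs group ownership of
// the mounted volume. GID, image and claim name are assumptions.
import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(1234)
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-demo"},
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []v1.Container{{
				Name:         "app",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ln /mnt/volume1 && sleep 3600"},
				VolumeMounts: []v1.VolumeMount{{Name: "vol", MountPath: "/mnt/volume1"}},
			}},
			Volumes: []v1.Volume{{
				Name: "vol",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: "local-pvc"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

The real specs then assert on the group ownership observed inside the mount path of the running pod.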
S [SKIPPING] in Spec Setup (BeforeEach) [22.783 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:47.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:32:47.953: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:47.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1466" for this suite. 
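All of the [Serial] Volume metrics specs in this run die in BeforeEach with "Only supported for providers [gce gke aws] (not local)": the suite was pointed at a local bare-metal provider, so the cloud-provider gate fires before any metric is touched. A self-contained sketch of that gating pattern using the standard testing package; the PROVIDER variable and the skipUnlessProviderIs helper are assumptions for illustration, not the e2e framework's actual API.

```go
package storage

// Sketch only: a provider gate that produces skips like the ones in this log.
// PROVIDER and skipUnlessProviderIs are illustrative, not the framework's API.
import (
	"os"
	"testing"
)

func skipUnlessProviderIs(t *testing.T, supported ...string) {
	t.Helper()
	provider := os.Getenv("PROVIDER") // e.g. "local", "gce", "gke", "aws"
	for _, p := range supported {
		if provider == p {
			return
		}
	}
	t.Skipf("Only supported for providers %v (not %s)", supported, provider)
}

func TestVolumeMetrics(t *testing.T) {
	skipUnlessProviderIs(t, "gce", "gke", "aws")
	// metric assertions would only run on the supported cloud providers
}
```

Because the gate trips during setup, each spec is reported as Skipped rather than Failed, which is how the suite still finishes with 0 failures.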
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:47.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:32:48.002: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:48.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4001" for this suite. 
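The PVController specs being skipped around here would otherwise compare controller-manager gauges (unbound/bound PV and PVC counts, total PV count by plugin and volume mode) before and after creating objects. A rough sketch of the read side, filtering a Prometheus text-format dump piped on stdin, for example one scraped from the controller-manager's /metrics endpoint; the pv_collector_ prefix is inferred from the spec titles and not confirmed against this cluster.

```go
package main

// Sketch only: print the pv_collector_* samples from a Prometheus text dump.
// The metric-name prefix is an assumption based on the spec titles above.
import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "pv_collector_") {
			fmt.Println(line) // e.g. an unbound PV/PVC count gauge
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

grep would do the same job; the specs themselves just diff these values across PV and PVC create operations.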
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.048 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:48.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 7 00:32:52.064: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6750 PodName:hostexec-node1-m5cmh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:32:52.064: INFO: >>> kubeConfig: /root/.kube/config May 7 00:32:52.162: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 7 00:32:52.162: INFO: exec node1: stdout: "0\n" May 7 00:32:52.162: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 7 00:32:52.162: INFO: exec node1: exit code: 0 May 7 00:32:52.162: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 
STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:52.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6750" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.155 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:52.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 May 7 00:32:52.206: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:52.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-9308" for this suite. 
S [SKIPPING] [0.038 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:459 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:52.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:32:52.243: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:52.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3527" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:52.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:32:52.275: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:52.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3909" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:52.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 7 00:32:54.331: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4245 PodName:hostexec-node1-rprfm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:32:54.331: INFO: >>> kubeConfig: /root/.kube/config May 7 00:32:54.411: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 7 00:32:54.411: INFO: exec node1: stdout: "0\n" May 7 00:32:54.411: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 7 00:32:54.411: INFO: exec node1: exit code: 0 May 7 00:32:54.411: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:54.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4245" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.135 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:54.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:32:54.446: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:54.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7595" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:54.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:32:54.479: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:32:54.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5468" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:32:54.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 7 00:33:22.556: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4605 PodName:hostexec-node2-k65gs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:33:22.557: INFO: >>> kubeConfig: /root/.kube/config May 7 00:33:22.658: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 7 00:33:22.658: INFO: exec node2: stdout: "0\n" May 7 00:33:22.659: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 7 00:33:22.659: INFO: exec node2: exit code: 0 May 7 00:33:22.659: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:33:22.661: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4605" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [28.179 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:33:22.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:33:22.701: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:33:22.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6248" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:33:22.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:33:22.731: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:33:22.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9460" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:33:22.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 7 00:33:24.788: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9369 PodName:hostexec-node1-8q6s8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:33:24.788: INFO: >>> kubeConfig: /root/.kube/config May 7 00:33:24.884: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 7 00:33:24.884: INFO: exec node1: stdout: "0\n" May 7 00:33:24.884: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 7 00:33:24.884: INFO: exec node1: exit code: 0 May 7 00:33:24.884: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:33:24.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "persistent-local-volumes-test-9369" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.151 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:33:24.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 7 00:33:28.943: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2564 PodName:hostexec-node1-2smxg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 7 00:33:28.943: INFO: >>> kubeConfig: /root/.kube/config May 7 00:33:29.054: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 7 00:33:29.054: INFO: exec node1: stdout: "0\n" May 7 00:33:29.054: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 7 00:33:29.054: INFO: exec node1: exit code: 0 May 7 00:33:29.054: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:33:29.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2564" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [4.169 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:33:29.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 7 00:33:29.093: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:33:29.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6059" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 7 00:33:29.114: INFO: Running AfterSuite actions on all nodes May 7 00:33:29.114: INFO: Running AfterSuite actions on node 1 May 7 00:33:29.114: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml {"msg":"Test Suite completed","total":21,"completed":2,"skipped":5771,"failed":0} Ran 2 of 5773 Specs in 219.288 seconds SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 5771 Skipped PASS Ginkgo ran 1 suite in 3m40.739115393s Test Suite Passed
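The trailing summary reads best next to the JSON progress records Ginkgo emits after every spec: "total" is the 21 focused specs, "completed" the ones that actually ran (2, both passed), and "skipped" everything filtered out or gated away (5771 by the end), matching "Ran 2 of 5773 Specs ... 2 Passed | 0 Failed | 0 Pending | 5771 Skipped". A small sketch that pulls those records back out of a saved log, assuming each record sits on its own line as originally written.

```go
package main

// Sketch only: extract the {"msg":...,"total":...,"completed":...} progress
// records from a saved run log piped on stdin.
import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type progress struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // log lines can be very long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue
		}
		var p progress
		if err := json.Unmarshal([]byte(line), &p); err == nil && p.Msg != "" {
			fmt.Printf("%-60s completed=%d skipped=%d failed=%d\n",
				p.Msg, p.Completed, p.Skipped, p.Failed)
		}
	}
}
```

The same two passing cases should appear in the JUnit report written to /home/opnfv/functest/results/sig_storage_serial/junit_01.xml.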