I0614 18:09:18.056697 16 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0614 18:09:18.056901 16 e2e.go:129] Starting e2e run "48b9fd52-e6d7-4038-a86d-4d24eea45843" on Ginkgo node 1
{"msg":"Test Suite starting","total":18,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1623694156 - Will randomize all specs
Will run 18 of 5668 specs

Jun 14 18:09:18.175: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 18:09:18.180: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 14 18:09:18.209: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 14 18:09:18.260: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 14 18:09:18.260: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 14 18:09:18.260: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 14 18:09:18.270: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Jun 14 18:09:18.270: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 14 18:09:18.270: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
Jun 14 18:09:18.270: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 14 18:09:18.270: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
Jun 14 18:09:18.270: INFO: e2e test version: v1.20.7
Jun 14 18:09:18.272: INFO: kube-apiserver version: v1.20.7
Jun 14 18:09:18.272: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 18:09:18.279: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other
  should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 18:09:18.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
Jun 14 18:09:18.329: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 14 18:09:18.338: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
Jun 14 18:09:20.360: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8319 PodName:hostexec-leguer-worker-5x4xs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:20.360: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 18:09:20.542: INFO: exec leguer-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Jun 14 18:09:20.542: INFO: exec leguer-worker: stdout: "0\n"
Jun 14 18:09:20.542: INFO: exec leguer-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Jun 14 18:09:20.542: INFO: exec leguer-worker: exit code: 0
Jun 14 18:09:20.542: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 18:09:20.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8319" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [2.274 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes
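Editor's note on the skip above: the suite decides whether SCSI-fs local SSDs exist by counting entries under the conventional GCE path `/mnt/disks/by-uuid/google-local-ssds-scsi-fs/`. The `exit code: 0` in the log despite the `ls` error is expected, because a shell pipeline's exit status comes from its last command (`wc`), not `ls`. A minimal standalone sketch of that check (the real test wraps the same command in `nsenter` via a hostexec pod):

```shell
#!/bin/sh
# Sketch of the localSSD presence check seen in the log above.
DIR="/mnt/disks/by-uuid/google-local-ssds-scsi-fs"

# If DIR is missing, `ls` fails, but the pipeline's exit status is taken
# from `wc -l`, so the command still exits 0 and the count is simply 0 --
# exactly what the log shows.
count=$(ls -1 "$DIR" 2>/dev/null | wc -l)

if [ "$count" -lt 1 ]; then
    # Same condition that makes the spec skip:
    echo "Requires at least 1 scsi fs localSSD" >&2
fi
echo "$count"
```

On a node without GCE local SSDs this prints `0` and the skip message, matching the `stdout: "0\n"` above.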
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 18:09:20.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441
STEP: Setting up 10 local volumes on node "leguer-worker"
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-c3658d95-4843-4508-ab2d-e7e778eb58a0"
Jun 14 18:09:22.612: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c3658d95-4843-4508-ab2d-e7e778eb58a0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c3658d95-4843-4508-ab2d-e7e778eb58a0" "/tmp/local-volume-test-c3658d95-4843-4508-ab2d-e7e778eb58a0"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:22.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-6777da7c-6bf7-410e-8af9-22456ca5e281"
Jun 14 18:09:22.778: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6777da7c-6bf7-410e-8af9-22456ca5e281" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6777da7c-6bf7-410e-8af9-22456ca5e281" "/tmp/local-volume-test-6777da7c-6bf7-410e-8af9-22456ca5e281"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:22.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-1d760373-3d7f-4a18-aad2-9a86df000ffc"
Jun 14 18:09:22.920: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1d760373-3d7f-4a18-aad2-9a86df000ffc" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1d760373-3d7f-4a18-aad2-9a86df000ffc" "/tmp/local-volume-test-1d760373-3d7f-4a18-aad2-9a86df000ffc"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:22.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-54e6a9b9-fe27-4805-9c5d-82175f58082a"
Jun 14 18:09:23.051: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-54e6a9b9-fe27-4805-9c5d-82175f58082a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-54e6a9b9-fe27-4805-9c5d-82175f58082a" "/tmp/local-volume-test-54e6a9b9-fe27-4805-9c5d-82175f58082a"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:23.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-8b85f24f-2fff-47d2-9173-76fa690d66f6"
Jun 14 18:09:23.199: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8b85f24f-2fff-47d2-9173-76fa690d66f6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8b85f24f-2fff-47d2-9173-76fa690d66f6" "/tmp/local-volume-test-8b85f24f-2fff-47d2-9173-76fa690d66f6"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:23.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-e0ab8f44-42ef-40aa-bcff-c9a54d954b38"
Jun 14 18:09:23.336: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e0ab8f44-42ef-40aa-bcff-c9a54d954b38" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e0ab8f44-42ef-40aa-bcff-c9a54d954b38" "/tmp/local-volume-test-e0ab8f44-42ef-40aa-bcff-c9a54d954b38"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:23.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-b7067f7d-57ab-48f0-87e3-ae7961e765e3"
Jun 14 18:09:23.493: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b7067f7d-57ab-48f0-87e3-ae7961e765e3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b7067f7d-57ab-48f0-87e3-ae7961e765e3" "/tmp/local-volume-test-b7067f7d-57ab-48f0-87e3-ae7961e765e3"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:23.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-3ce36e24-5413-48b6-8dcb-a856c5715248"
Jun 14 18:09:23.638: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3ce36e24-5413-48b6-8dcb-a856c5715248" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3ce36e24-5413-48b6-8dcb-a856c5715248" "/tmp/local-volume-test-3ce36e24-5413-48b6-8dcb-a856c5715248"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:23.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-0181751b-c8a4-4d17-91a5-947da1662690"
Jun 14 18:09:23.784: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0181751b-c8a4-4d17-91a5-947da1662690" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0181751b-c8a4-4d17-91a5-947da1662690" "/tmp/local-volume-test-0181751b-c8a4-4d17-91a5-947da1662690"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:23.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-fe16022e-25a9-4a7a-aa5c-9e4500d4bfa1"
Jun 14 18:09:23.933: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-fe16022e-25a9-4a7a-aa5c-9e4500d4bfa1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-fe16022e-25a9-4a7a-aa5c-9e4500d4bfa1" "/tmp/local-volume-test-fe16022e-25a9-4a7a-aa5c-9e4500d4bfa1"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:23.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Setting up 10 local volumes on node "leguer-worker2"
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-0bde586d-5ff8-42a9-8c31-45b3f6f78b08"
Jun 14 18:09:26.103: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0bde586d-5ff8-42a9-8c31-45b3f6f78b08" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0bde586d-5ff8-42a9-8c31-45b3f6f78b08" "/tmp/local-volume-test-0bde586d-5ff8-42a9-8c31-45b3f6f78b08"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:26.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-ab57fd99-28a2-4cff-8725-e883aeee5be5"
Jun 14 18:09:26.254: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ab57fd99-28a2-4cff-8725-e883aeee5be5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ab57fd99-28a2-4cff-8725-e883aeee5be5" "/tmp/local-volume-test-ab57fd99-28a2-4cff-8725-e883aeee5be5"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:26.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-ede2590a-6280-4d6b-ac43-fddb177a0e81"
Jun 14 18:09:26.403: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ede2590a-6280-4d6b-ac43-fddb177a0e81" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ede2590a-6280-4d6b-ac43-fddb177a0e81" "/tmp/local-volume-test-ede2590a-6280-4d6b-ac43-fddb177a0e81"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:26.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-fb06c3dc-00bd-4d57-abb3-09fb67eec007"
Jun 14 18:09:26.562: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-fb06c3dc-00bd-4d57-abb3-09fb67eec007" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-fb06c3dc-00bd-4d57-abb3-09fb67eec007" "/tmp/local-volume-test-fb06c3dc-00bd-4d57-abb3-09fb67eec007"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:26.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-2da75aba-2347-4eda-8b10-e63d9fdd44ad"
Jun 14 18:09:26.707: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2da75aba-2347-4eda-8b10-e63d9fdd44ad" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2da75aba-2347-4eda-8b10-e63d9fdd44ad" "/tmp/local-volume-test-2da75aba-2347-4eda-8b10-e63d9fdd44ad"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:26.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-35bcd7c7-3779-4691-a008-0227e02699fb"
Jun 14 18:09:26.841: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-35bcd7c7-3779-4691-a008-0227e02699fb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-35bcd7c7-3779-4691-a008-0227e02699fb" "/tmp/local-volume-test-35bcd7c7-3779-4691-a008-0227e02699fb"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:26.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-b3abf9d6-927a-4a10-b40c-88e9ad923eed"
Jun 14 18:09:26.993: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b3abf9d6-927a-4a10-b40c-88e9ad923eed" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b3abf9d6-927a-4a10-b40c-88e9ad923eed" "/tmp/local-volume-test-b3abf9d6-927a-4a10-b40c-88e9ad923eed"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:26.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-daec99ca-ab4b-4561-a1f4-b0fd26e50d01"
Jun 14 18:09:27.140: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-daec99ca-ab4b-4561-a1f4-b0fd26e50d01" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-daec99ca-ab4b-4561-a1f4-b0fd26e50d01" "/tmp/local-volume-test-daec99ca-ab4b-4561-a1f4-b0fd26e50d01"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:27.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-2c034de8-0875-445d-aa20-c4c089ceb741"
Jun 14 18:09:27.292: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2c034de8-0875-445d-aa20-c4c089ceb741" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2c034de8-0875-445d-aa20-c4c089ceb741" "/tmp/local-volume-test-2c034de8-0875-445d-aa20-c4c089ceb741"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:27.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-7226670a-105e-49da-85c7-31a8e4413fcc"
Jun 14 18:09:27.432: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7226670a-105e-49da-85c7-31a8e4413fcc" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7226670a-105e-49da-85c7-31a8e4413fcc" "/tmp/local-volume-test-7226670a-105e-49da-85c7-31a8e4413fcc"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:09:27.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Create 20 PVs
STEP: Start a goroutine to recycle unbound PVs
[It] should be able to process many pods and reuse local volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
STEP: Creating 7 pods periodically
STEP: Waiting for all pods to complete successfully
Jun 14 18:09:32.766: INFO: Deleting pod pod-a371a525-716a-4552-946d-0655d99949e1
Jun 14 18:09:32.928: INFO: Deleting PersistentVolumeClaim "pvc-6tdck"
Jun 14 18:09:32.933: INFO: Deleting PersistentVolumeClaim "pvc-rmbm9"
Jun 14 18:09:32.939: INFO: Deleting PersistentVolumeClaim "pvc-4cr2s"
Jun 14 18:09:32.944: INFO: 1/28 pods finished
STEP: Delete "local-pvhknrl" and create a new PV for same local volume storage
STEP: Delete "local-pvhknrl" and create a new PV for same local volume storage
STEP: Delete "local-pvlgvqt" and create a new PV for same local volume storage
STEP: Delete "local-pvlgvqt" and create a new PV for same local volume storage
STEP: Delete "local-pvwgkdn" and create a new PV for same local volume storage
STEP: Delete "local-pvwgkdn" and create a new PV for same local volume storage
Jun 14 18:09:33.764: INFO: Deleting pod pod-02af3edf-ae10-4415-b469-36fe08a59564
Jun 14 18:09:33.770: INFO: Deleting PersistentVolumeClaim "pvc-qn5v7"
Jun 14 18:09:33.781: INFO: Deleting PersistentVolumeClaim "pvc-lt7sq"
Jun 14 18:09:33.784: INFO: Deleting PersistentVolumeClaim "pvc-jj4dn"
Jun 14 18:09:33.787: INFO: 2/28 pods finished
STEP: Delete "local-pvbhhs6" and create a new PV for same local volume storage
STEP: Delete "local-pvbhhs6" and create a new PV for same local volume storage
STEP: Delete "local-pvbk2c6" and create a new PV for same local volume storage
STEP: Delete "local-pvbk2c6" and create a new PV for same local volume storage
STEP: Delete "local-pvhrvx8" and create a new PV for same local volume storage
STEP: Delete "local-pvhrvx8" and create a new PV for same local volume storage
Jun 14 18:09:34.765: INFO: Deleting pod pod-3b264baa-c392-4fa3-851b-3dd1e77b3ac2
Jun 14 18:09:34.774: INFO: Deleting PersistentVolumeClaim "pvc-9ht4x"
Jun 14 18:09:34.779: INFO: Deleting PersistentVolumeClaim "pvc-x77fv"
Jun 14 18:09:34.785: INFO: Deleting PersistentVolumeClaim "pvc-jfm7w"
Jun 14 18:09:34.790: INFO: 3/28 pods finished
Jun 14 18:09:34.790: INFO: Deleting pod pod-9af7f045-bba9-48ae-bf64-ecb03cd4fd0e
Jun 14 18:09:34.797: INFO: Deleting PersistentVolumeClaim "pvc-smwsj"
STEP: Delete "local-pv2bfbp" and create a new PV for same local volume storage
Jun 14 18:09:34.801: INFO: Deleting PersistentVolumeClaim "pvc-jrcgw"
Jun 14 18:09:34.805: INFO: Deleting PersistentVolumeClaim "pvc-4qc4r"
Jun 14 18:09:34.809: INFO: 4/28 pods finished
STEP: Delete "local-pv2bfbp" and create a new PV for same local volume storage
STEP: Delete "local-pvs95pd" and create a new PV for same local volume storage
STEP: Delete "local-pvs95pd" and create a new PV for same local volume storage
STEP: Delete "local-pv4vrgn" and create a new PV for same local volume storage
STEP: Delete "local-pv4vrgn" and create a new PV for same local volume storage
STEP: Delete "local-pvrgx4m" and create a new PV for same local volume storage
STEP: Delete "local-pvvkvnn" and create a new PV for same local volume storage
STEP: Delete "local-pvw75pt" and create a new PV for same local volume storage
Jun 14 18:09:36.806: INFO: Deleting pod pod-0dd7b3e2-1130-4b7f-8a5f-990511e35341
Jun 14 18:09:36.813: INFO: Deleting PersistentVolumeClaim "pvc-n8lfp"
Jun 14 18:09:36.817: INFO: Deleting PersistentVolumeClaim "pvc-kt4vr"
Jun 14 18:09:36.821: INFO: Deleting PersistentVolumeClaim "pvc-7w42x"
Jun 14 18:09:36.825: INFO: 5/28 pods finished
Jun 14 18:09:36.825: INFO: Deleting pod pod-71a0eab0-329e-4646-9a65-8b7559573be8
Jun 14 18:09:36.832: INFO: Deleting PersistentVolumeClaim "pvc-z8gkg"
Jun 14 18:09:36.836: INFO: Deleting PersistentVolumeClaim "pvc-qg2jc"
Jun 14 18:09:36.840: INFO: Deleting PersistentVolumeClaim "pvc-kndz8"
Jun 14 18:09:36.844: INFO: 6/28 pods finished
STEP: Delete "local-pvws7xk" and create a new PV for same local volume storage
STEP: Delete "local-pvws7xk" and create a new PV for same local volume storage
STEP: Delete "local-pvt2m5l" and create a new PV for same local volume storage
STEP: Delete "local-pvt2m5l" and create a new PV for same local volume storage
STEP: Delete "local-pvnjmpz" and create a new PV for same local volume storage
STEP: Delete "local-pvnjmpz" and create a new PV for same local volume storage
STEP: Delete "local-pv4bmxn" and create a new PV for same local volume storage
STEP: Delete "local-pv4bmxn" and create a new PV for same local volume storage
STEP: Delete "local-pvmfxrq" and create a new PV for same local volume storage
STEP: Delete "local-pvmfxrq" and create a new PV for same local volume storage
STEP: Delete "local-pvsrvfn" and create a new PV for same local volume storage
STEP: Delete "local-pvsrvfn" and create a new PV for same local volume storage
Jun 14 18:09:42.765: INFO: Deleting pod pod-aad5fb75-8ba3-424a-9eb9-78892a81c21a
Jun 14 18:09:42.774: INFO: Deleting PersistentVolumeClaim "pvc-t7rjr"
Jun 14 18:09:42.779: INFO: Deleting PersistentVolumeClaim "pvc-sbn9r"
Jun 14 18:09:42.783: INFO: Deleting PersistentVolumeClaim "pvc-sxhqt"
Jun 14 18:09:42.787: INFO: 7/28 pods finished
STEP: Delete "local-pv4tpm7" and create a new PV for same local volume storage
STEP: Delete "local-pv4tpm7" and create a new PV for same local volume storage
STEP: Delete "local-pvwdnpc" and create a new PV for same local volume storage
STEP: Delete "local-pvwdnpc" and create a new PV for same local volume storage
STEP: Delete "local-pv6r8rt" and create a new PV for same local volume storage
Jun 14 18:09:43.766: INFO: Deleting pod pod-651ff3d0-23e7-47cd-828f-a322bc8add55
Jun 14 18:09:43.776: INFO: Deleting PersistentVolumeClaim "pvc-nwm6j"
Jun 14 18:09:43.781: INFO: Deleting PersistentVolumeClaim "pvc-8bfpg"
Jun 14 18:09:43.786: INFO: Deleting PersistentVolumeClaim "pvc-mmprs"
Jun 14 18:09:43.791: INFO: 8/28 pods finished
Jun 14 18:09:43.791: INFO: Deleting pod pod-fb7f6932-43e7-4062-a2ab-59088a5b2a39
Jun 14 18:09:43.798: INFO: Deleting PersistentVolumeClaim "pvc-7zkgl"
Jun 14 18:09:43.803: INFO: Deleting PersistentVolumeClaim "pvc-n7dzn"
STEP: Delete "local-pvfprrg" and create a new PV for same local volume storage
Jun 14 18:09:43.807: INFO: Deleting PersistentVolumeClaim "pvc-ndtv6"
Jun 14 18:09:43.810: INFO: 9/28 pods finished
STEP: Delete "local-pvfprrg" and create a new PV for same local volume storage
STEP: Delete "local-pvnv5w9" and create a new PV for same local volume storage
STEP: Delete "local-pvnv5w9" and create a new PV for same local volume storage
STEP: Delete "local-pv62948" and create a new PV for same local volume storage
STEP: Delete "local-pvwc4wj" and create a new PV for same local volume storage
STEP: Delete "local-pvh9cjq" and create a new PV for same local volume storage
STEP: Delete "local-pvmg2mv" and create a new PV for same local volume storage
Jun 14 18:09:44.765: INFO: Deleting pod pod-ca1171f3-8fa8-49b2-926c-6ed4d20b5421
Jun 14 18:09:44.775: INFO: Deleting PersistentVolumeClaim "pvc-9fjv2"
Jun 14 18:09:44.784: INFO: Deleting PersistentVolumeClaim "pvc-fzz2f"
Jun 14 18:09:44.792: INFO: Deleting PersistentVolumeClaim "pvc-8zjq7"
Jun 14 18:09:44.797: INFO: 10/28 pods finished
STEP: Delete "local-pv9jwvj" and create a new PV for same local volume storage
STEP: Delete "local-pv9jwvj" and create a new PV for same local volume storage
STEP: Delete "local-pvq582m" and create a new PV for same local volume storage
STEP: Delete "local-pvq582m" and create a new PV for same local volume storage
STEP: Delete "local-pvp2842" and create a new PV for same local volume storage
STEP: Delete "local-pvp2842" and create a new PV for same local volume storage
Jun 14 18:09:46.765: INFO: Deleting pod pod-6fb9ed66-0ff8-4dd5-8ace-8ed88903ba6c
Jun 14 18:09:46.775: INFO: Deleting PersistentVolumeClaim "pvc-p5jj7"
Jun 14 18:09:46.781: INFO: Deleting PersistentVolumeClaim "pvc-bwckc"
Jun 14 18:09:46.787: INFO: Deleting PersistentVolumeClaim "pvc-6zdsz"
Jun 14 18:09:46.792: INFO: 11/28 pods finished
Jun 14 18:09:46.792: INFO: Deleting pod pod-ec4fc3bb-b37d-4ed0-a62c-45b371995e6e
Jun 14 18:09:46.800: INFO: Deleting PersistentVolumeClaim "pvc-dzwmb"
Jun 14 18:09:46.851: INFO: Deleting PersistentVolumeClaim "pvc-7wmx2"
STEP: Delete "local-pvkzntv" and create a new PV for same local volume storage
Jun 14 18:09:46.855: INFO: Deleting PersistentVolumeClaim "pvc-cjxlm"
Jun 14 18:09:46.859: INFO: 12/28 pods finished
STEP: Delete "local-pvkzntv" and create a new PV for same local volume storage
STEP: Delete "local-pv8nrsl" and create a new PV for same local volume storage
STEP: Delete "local-pv8nrsl" and create a new PV for same local volume storage
STEP: Delete "local-pv7jbs2" and create a new PV for same local volume storage
STEP: Delete "local-pv7jbs2" and create a new PV for same local volume storage
STEP: Delete "local-pv8lg7c" and create a new PV for same local volume storage
STEP: Delete "local-pvhwdcp" and create a new PV for same local volume storage
STEP: Delete "local-pv4dncm" and create a new PV for same local volume storage
Jun 14 18:09:48.765: INFO: Deleting pod pod-29501e23-9b9e-467f-9222-c13101acf081
Jun 14 18:09:48.775: INFO: Deleting PersistentVolumeClaim "pvc-p8rcf"
Jun 14 18:09:48.779: INFO: Deleting PersistentVolumeClaim "pvc-rp42x"
Jun 14 18:09:48.784: INFO: Deleting PersistentVolumeClaim "pvc-7vqvf"
Jun 14 18:09:48.789: INFO: 13/28 pods finished
STEP: Delete "local-pvvctzf" and create a new PV for same local volume storage
STEP: Delete "local-pvvctzf" and create a new PV for same local volume storage
STEP: Delete "local-pvkdmnv" and create a new PV for same local volume storage
STEP: Delete "local-pvkdmnv" and create a new PV for same local volume storage
STEP: Delete "local-pvx85tb" and create a new PV for same local volume storage
STEP: Delete "local-pvx85tb" and create a new PV for same local volume storage
Jun 14 18:09:53.766: INFO: Deleting pod pod-a90b2a44-75d6-41f9-a947-978ad4db95bc
Jun 14 18:09:53.775: INFO: Deleting PersistentVolumeClaim "pvc-dg9m4"
Jun 14 18:09:53.780: INFO: Deleting PersistentVolumeClaim "pvc-xzk2p"
Jun 14 18:09:53.786: INFO: Deleting PersistentVolumeClaim "pvc-hhh5m"
Jun 14 18:09:53.790: INFO: 14/28 pods finished
STEP: Delete "local-pvsnq6m" and create a new PV for same local volume storage
STEP: Delete "local-pvsnq6m" and create a new PV for same local volume storage
STEP: Delete "local-pvjzdzq" and create a new PV for same local volume storage
STEP: Delete "local-pvjzdzq" and create a new PV for same local volume storage
STEP: Delete "local-pv8lkz4" and create a new PV for same local volume storage
STEP: Delete "local-pv8lkz4" and create a new PV for same local volume storage
Jun 14 18:09:54.766: INFO: Deleting pod pod-189a7362-b375-4a9b-babd-4f1e78109f5f
Jun 14 18:09:54.775: INFO: Deleting PersistentVolumeClaim "pvc-64grc"
Jun 14 18:09:54.780: INFO: Deleting PersistentVolumeClaim "pvc-q26gd"
Jun 14 18:09:54.785: INFO: Deleting PersistentVolumeClaim "pvc-qcrjv"
Jun 14 18:09:54.790: INFO: 15/28 pods finished
STEP: Delete "local-pvjlfpf" and create a new PV for same local volume storage
STEP: Delete "local-pvjlfpf" and create a new PV for same local volume storage
STEP: Delete "local-pv5sxhv" and create a new PV for same local volume storage
STEP: Delete "local-pvf4mj6" and create a new PV for same local volume storage
Jun 14 18:09:55.765: INFO: Deleting pod pod-86e5b04b-f000-4acd-92ee-d8ceac5730eb
Jun 14 18:09:55.774: INFO: Deleting PersistentVolumeClaim "pvc-d7cm7"
Jun 14 18:09:55.780: INFO: Deleting PersistentVolumeClaim "pvc-whdc8"
Jun 14 18:09:55.785: INFO: Deleting PersistentVolumeClaim "pvc-gx4ld"
Jun 14 18:09:55.790: INFO: 16/28 pods finished
Jun 14 18:09:55.790: INFO: Deleting pod pod-af50596a-3493-4d20-9797-92fe17fd8146
Jun 14 18:09:55.798: INFO: Deleting PersistentVolumeClaim "pvc-qn6mt"
Jun 14 18:09:55.802: INFO: Deleting PersistentVolumeClaim "pvc-zhcgr"
STEP: Delete "local-pv2kjhx" and create a new PV for same local volume storage
Jun 14 18:09:55.806: INFO: Deleting PersistentVolumeClaim "pvc-9z6np"
Jun 14 18:09:55.811: INFO: 17/28 pods finished
STEP: Delete "local-pv2kjhx" and create a new PV for same local volume storage
STEP: Delete "local-pv6kl7j" and create a new PV for same local volume storage
STEP: Delete "local-pv6kl7j" and create a new PV for same local volume storage
STEP: Delete "local-pvv84ph" and create a new PV for same local volume storage
STEP: Delete "local-pvv84ph" and create a new PV for same local volume storage
STEP: Delete "local-pv8529p" and create a new PV for same local volume storage
STEP: Delete "local-pv5lnhz" and create a new PV for same local volume storage
STEP: Delete "local-pvwhfvl" and create a new PV for same local volume storage
Jun 14 18:09:57.770: INFO: Deleting pod pod-b935137b-8142-46b0-be50-d9b4019aa06e
Jun 14 18:09:57.780: INFO: Deleting PersistentVolumeClaim "pvc-s42tp"
Jun 14 18:09:57.786: INFO: Deleting PersistentVolumeClaim "pvc-tmwxr"
Jun 14 18:09:57.792: INFO: Deleting PersistentVolumeClaim "pvc-dwjxc"
Jun 14 18:09:57.796: INFO: 18/28 pods finished
STEP: Delete "local-pvb42np" and create a new PV for same local volume storage
STEP: Delete "local-pvb42np" and create a new PV for same local volume storage
STEP: Delete "local-pvrldgl" and create a new PV for same local volume storage
STEP: Delete "local-pvs9bn5" and create a new PV for same local volume storage
Jun 14 18:09:58.765: INFO: Deleting pod pod-2ec4817e-e748-4272-aaa9-dff9d4be8a6c
Jun 14 18:09:58.794: INFO: Deleting PersistentVolumeClaim "pvc-247j5"
Jun 14 18:09:58.811: INFO: Deleting PersistentVolumeClaim "pvc-q52zf"
Jun 14 18:09:58.814: INFO: Deleting PersistentVolumeClaim "pvc-9cqz4"
Jun 14 18:09:58.817: INFO: 19/28 pods finished
STEP: Delete "local-pvrpqcr" and create a new PV for same local volume storage
STEP: Delete "local-pvrpqcr" and create a new PV for same local volume storage
STEP: Delete "local-pvm4dj8" and create a new PV for same local volume storage
STEP: Delete "local-pvm4dj8" and create a new PV for same local volume storage
STEP: Delete "local-pvmlrxg" and create a new PV for same local volume storage
STEP: Delete "local-pvmlrxg" and create a new PV for same local volume storage
Jun 14 18:10:04.766: INFO: Deleting pod pod-43cc431f-e84c-4c5b-8c87-e2f6a9d11a80
Jun 14 18:10:04.774: INFO: Deleting PersistentVolumeClaim "pvc-rvjvw"
Jun 14 18:10:04.779: INFO: Deleting 
PersistentVolumeClaim "pvc-bmf2s" Jun 14 18:10:04.784: INFO: Deleting PersistentVolumeClaim "pvc-c82sp" Jun 14 18:10:04.788: INFO: 20/28 pods finished Jun 14 18:10:04.788: INFO: Deleting pod pod-6891cd2b-df14-487e-8bab-73ca7311318e Jun 14 18:10:04.796: INFO: Deleting PersistentVolumeClaim "pvc-wqsjn" Jun 14 18:10:04.800: INFO: Deleting PersistentVolumeClaim "pvc-mgqzz" STEP: Delete "local-pv22zf8" and create a new PV for same local volume storage Jun 14 18:10:04.804: INFO: Deleting PersistentVolumeClaim "pvc-kz5cf" Jun 14 18:10:04.812: INFO: 21/28 pods finished STEP: Delete "local-pv22zf8" and create a new PV for same local volume storage STEP: Delete "local-pvp6b7v" and create a new PV for same local volume storage STEP: Delete "local-pvp6b7v" and create a new PV for same local volume storage STEP: Delete "local-pvdh8s7" and create a new PV for same local volume storage STEP: Delete "local-pvdh8s7" and create a new PV for same local volume storage STEP: Delete "local-pvxpppn" and create a new PV for same local volume storage STEP: Delete "local-pvbx4v6" and create a new PV for same local volume storage STEP: Delete "local-pvg2pxk" and create a new PV for same local volume storage Jun 14 18:10:05.766: INFO: Deleting pod pod-0570ac5d-0738-4705-9a3c-1b21c69af16c Jun 14 18:10:05.774: INFO: Deleting PersistentVolumeClaim "pvc-f49h7" Jun 14 18:10:05.779: INFO: Deleting PersistentVolumeClaim "pvc-9lckm" Jun 14 18:10:05.785: INFO: Deleting PersistentVolumeClaim "pvc-8crpq" Jun 14 18:10:05.790: INFO: 22/28 pods finished Jun 14 18:10:05.790: INFO: Deleting pod pod-561c4f5f-d152-48a8-9dc7-fd902300438f Jun 14 18:10:05.797: INFO: Deleting PersistentVolumeClaim "pvc-9ddwk" Jun 14 18:10:05.801: INFO: Deleting PersistentVolumeClaim "pvc-pfnvx" STEP: Delete "local-pvm9787" and create a new PV for same local volume storage Jun 14 18:10:05.805: INFO: Deleting PersistentVolumeClaim "pvc-nm55l" Jun 14 18:10:05.809: INFO: 23/28 pods finished STEP: Delete "local-pvm9787" and create a 
new PV for same local volume storage STEP: Delete "local-pvbmr2d" and create a new PV for same local volume storage STEP: Delete "local-pvbmr2d" and create a new PV for same local volume storage STEP: Delete "local-pvnrpsw" and create a new PV for same local volume storage STEP: Delete "local-pvnrpsw" and create a new PV for same local volume storage STEP: Delete "local-pv5v7d8" and create a new PV for same local volume storage STEP: Delete "local-pv5v7d8" and create a new PV for same local volume storage STEP: Delete "local-pvmj5cs" and create a new PV for same local volume storage STEP: Delete "local-pvmj5cs" and create a new PV for same local volume storage STEP: Delete "local-pvdqdld" and create a new PV for same local volume storage STEP: Delete "local-pvdqdld" and create a new PV for same local volume storage Jun 14 18:10:06.765: INFO: Deleting pod pod-ad41f350-cefe-4d07-b805-ab770c71de13 Jun 14 18:10:06.774: INFO: Deleting PersistentVolumeClaim "pvc-9nc2r" Jun 14 18:10:06.781: INFO: Deleting PersistentVolumeClaim "pvc-n5cc6" Jun 14 18:10:06.786: INFO: Deleting PersistentVolumeClaim "pvc-mqhtg" Jun 14 18:10:06.790: INFO: 24/28 pods finished STEP: Delete "local-pvnzc7w" and create a new PV for same local volume storage STEP: Delete "local-pvnzc7w" and create a new PV for same local volume storage STEP: Delete "local-pvcm6hj" and create a new PV for same local volume storage STEP: Delete "local-pvcm6hj" and create a new PV for same local volume storage STEP: Delete "local-pvmtdx7" and create a new PV for same local volume storage STEP: Delete "local-pvmtdx7" and create a new PV for same local volume storage Jun 14 18:10:07.765: INFO: Deleting pod pod-8ebdd2cc-a614-4279-8770-1ac632963e78 Jun 14 18:10:07.775: INFO: Deleting PersistentVolumeClaim "pvc-zgr48" Jun 14 18:10:07.782: INFO: Deleting PersistentVolumeClaim "pvc-xlh4w" Jun 14 18:10:07.787: INFO: Deleting PersistentVolumeClaim "pvc-djcxw" Jun 14 18:10:07.792: INFO: 25/28 pods finished STEP: Delete 
"local-pv5b2q4" and create a new PV for same local volume storage STEP: Delete "local-pv5b2q4" and create a new PV for same local volume storage STEP: Delete "local-pvf6bs4" and create a new PV for same local volume storage STEP: Delete "local-pvf6bs4" and create a new PV for same local volume storage STEP: Delete "local-pvqw26r" and create a new PV for same local volume storage STEP: Delete "local-pvqw26r" and create a new PV for same local volume storage Jun 14 18:10:13.765: INFO: Deleting pod pod-68632117-cf13-4f0d-909e-a10971356bb1 Jun 14 18:10:13.777: INFO: Deleting PersistentVolumeClaim "pvc-k2z78" Jun 14 18:10:13.782: INFO: Deleting PersistentVolumeClaim "pvc-q8kt2" Jun 14 18:10:13.790: INFO: Deleting PersistentVolumeClaim "pvc-t589q" Jun 14 18:10:13.795: INFO: 26/28 pods finished STEP: Delete "local-pvb4wv4" and create a new PV for same local volume storage STEP: Delete "local-pvb4wv4" and create a new PV for same local volume storage STEP: Delete "local-pv85w2n" and create a new PV for same local volume storage STEP: Delete "local-pvwdkx9" and create a new PV for same local volume storage STEP: Delete "local-pvwdkx9" and create a new PV for same local volume storage Jun 14 18:10:14.765: INFO: Deleting pod pod-18b34f8d-9f44-4a49-bdbe-761074b27efd Jun 14 18:10:14.775: INFO: Deleting PersistentVolumeClaim "pvc-5nk85" Jun 14 18:10:14.784: INFO: Deleting PersistentVolumeClaim "pvc-bpmqf" Jun 14 18:10:14.789: INFO: Deleting PersistentVolumeClaim "pvc-s5x89" Jun 14 18:10:14.793: INFO: 27/28 pods finished STEP: Delete "local-pvc9v89" and create a new PV for same local volume storage STEP: Delete "local-pvc9v89" and create a new PV for same local volume storage STEP: Delete "local-pv6nqrt" and create a new PV for same local volume storage STEP: Delete "local-pv6nqrt" and create a new PV for same local volume storage STEP: Delete "local-pvdmxpm" and create a new PV for same local volume storage STEP: Delete "local-pvdmxpm" and create a new PV for same local volume 
storage Jun 14 18:10:19.766: INFO: Deleting pod pod-e1a4633c-5880-46f4-9c50-97fea9562f95 Jun 14 18:10:19.782: INFO: Deleting PersistentVolumeClaim "pvc-h59rw" Jun 14 18:10:19.791: INFO: Deleting PersistentVolumeClaim "pvc-z4ghq" Jun 14 18:10:19.797: INFO: Deleting PersistentVolumeClaim "pvc-v47pq" Jun 14 18:10:19.801: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:505 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "leguer-worker2" STEP: Cleaning up PVC and PV Jun 14 18:10:19.801: INFO: pvc is nil Jun 14 18:10:19.801: INFO: Deleting PersistentVolume "local-pvmbn4b" STEP: Cleaning up PVC and PV Jun 14 18:10:19.805: INFO: pvc is nil Jun 14 18:10:19.805: INFO: Deleting PersistentVolume "local-pvvztng" STEP: Cleaning up PVC and PV Jun 14 18:10:19.810: INFO: pvc is nil Jun 14 18:10:19.810: INFO: Deleting PersistentVolume "local-pv6fnx6" STEP: Cleaning up PVC and PV Jun 14 18:10:19.814: INFO: pvc is nil Jun 14 18:10:19.814: INFO: Deleting PersistentVolume "local-pv6pgw2" STEP: Cleaning up PVC and PV Jun 14 18:10:19.818: INFO: pvc is nil Jun 14 18:10:19.818: INFO: Deleting PersistentVolume "local-pvj6r9h" STEP: Cleaning up PVC and PV Jun 14 18:10:19.822: INFO: pvc is nil Jun 14 18:10:19.822: INFO: Deleting PersistentVolume "local-pvkk5cf" STEP: Cleaning up PVC and PV Jun 14 18:10:19.826: INFO: pvc is nil Jun 14 18:10:19.826: INFO: Deleting PersistentVolume "local-pvm8fwv" STEP: Cleaning up PVC and PV Jun 14 18:10:19.830: INFO: pvc is nil Jun 14 18:10:19.830: INFO: Deleting PersistentVolume "local-pvmbqdc" STEP: Cleaning up PVC and PV Jun 14 18:10:19.834: INFO: pvc is nil Jun 14 18:10:19.834: INFO: Deleting PersistentVolume "local-pv9qzzk" STEP: Cleaning up PVC and PV Jun 14 18:10:19.838: INFO: pvc is nil Jun 14 18:10:19.838: INFO: Deleting PersistentVolume 
"local-pvgch8h" STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-0bde586d-5ff8-42a9-8c31-45b3f6f78b08" Jun 14 18:10:19.842: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0bde586d-5ff8-42a9-8c31-45b3f6f78b08"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:19.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:20.008: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0bde586d-5ff8-42a9-8c31-45b3f6f78b08] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:20.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-ab57fd99-28a2-4cff-8725-e883aeee5be5" Jun 14 18:10:20.134: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ab57fd99-28a2-4cff-8725-e883aeee5be5"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:20.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:20.269: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ab57fd99-28a2-4cff-8725-e883aeee5be5] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 
18:10:20.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-ede2590a-6280-4d6b-ac43-fddb177a0e81" Jun 14 18:10:20.425: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ede2590a-6280-4d6b-ac43-fddb177a0e81"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:20.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:20.558: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ede2590a-6280-4d6b-ac43-fddb177a0e81] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:20.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-fb06c3dc-00bd-4d57-abb3-09fb67eec007" Jun 14 18:10:20.697: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-fb06c3dc-00bd-4d57-abb3-09fb67eec007"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:20.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:20.846: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fb06c3dc-00bd-4d57-abb3-09fb67eec007] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 14 18:10:20.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-2da75aba-2347-4eda-8b10-e63d9fdd44ad" Jun 14 18:10:20.984: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2da75aba-2347-4eda-8b10-e63d9fdd44ad"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:20.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:21.125: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2da75aba-2347-4eda-8b10-e63d9fdd44ad] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:21.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-35bcd7c7-3779-4691-a008-0227e02699fb" Jun 14 18:10:21.278: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-35bcd7c7-3779-4691-a008-0227e02699fb"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:21.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:21.422: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-35bcd7c7-3779-4691-a008-0227e02699fb] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:21.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-b3abf9d6-927a-4a10-b40c-88e9ad923eed" Jun 14 18:10:21.555: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b3abf9d6-927a-4a10-b40c-88e9ad923eed"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:21.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:21.697: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b3abf9d6-927a-4a10-b40c-88e9ad923eed] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:21.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-daec99ca-ab4b-4561-a1f4-b0fd26e50d01" Jun 14 18:10:21.838: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-daec99ca-ab4b-4561-a1f4-b0fd26e50d01"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:21.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:22.002: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-daec99ca-ab4b-4561-a1f4-b0fd26e50d01] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:22.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-2c034de8-0875-445d-aa20-c4c089ceb741" Jun 14 18:10:22.146: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2c034de8-0875-445d-aa20-c4c089ceb741"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:22.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:22.295: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2c034de8-0875-445d-aa20-c4c089ceb741] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:22.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker2" at path "/tmp/local-volume-test-7226670a-105e-49da-85c7-31a8e4413fcc" Jun 14 18:10:22.434: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7226670a-105e-49da-85c7-31a8e4413fcc"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:22.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:22.563: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7226670a-105e-49da-85c7-31a8e4413fcc] Namespace:persistent-local-volumes-test-1714 
PodName:hostexec-leguer-worker2-kbm5f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:22.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "leguer-worker" STEP: Cleaning up PVC and PV Jun 14 18:10:22.702: INFO: pvc is nil Jun 14 18:10:22.702: INFO: Deleting PersistentVolume "local-pvg52p8" STEP: Cleaning up PVC and PV Jun 14 18:10:22.709: INFO: pvc is nil Jun 14 18:10:22.709: INFO: Deleting PersistentVolume "local-pvpv8lk" STEP: Cleaning up PVC and PV Jun 14 18:10:22.722: INFO: pvc is nil Jun 14 18:10:22.722: INFO: Deleting PersistentVolume "local-pvwsmxl" STEP: Cleaning up PVC and PV Jun 14 18:10:22.727: INFO: pvc is nil Jun 14 18:10:22.727: INFO: Deleting PersistentVolume "local-pv2gn2r" STEP: Cleaning up PVC and PV Jun 14 18:10:22.732: INFO: pvc is nil Jun 14 18:10:22.732: INFO: Deleting PersistentVolume "local-pvkp7zt" STEP: Cleaning up PVC and PV Jun 14 18:10:22.736: INFO: pvc is nil Jun 14 18:10:22.736: INFO: Deleting PersistentVolume "local-pv6wmxh" STEP: Cleaning up PVC and PV Jun 14 18:10:22.741: INFO: pvc is nil Jun 14 18:10:22.741: INFO: Deleting PersistentVolume "local-pvknk27" STEP: Cleaning up PVC and PV Jun 14 18:10:22.746: INFO: pvc is nil Jun 14 18:10:22.746: INFO: Deleting PersistentVolume "local-pvnrhss" STEP: Cleaning up PVC and PV Jun 14 18:10:22.750: INFO: pvc is nil Jun 14 18:10:22.750: INFO: Deleting PersistentVolume "local-pv768ws" STEP: Cleaning up PVC and PV Jun 14 18:10:22.755: INFO: pvc is nil Jun 14 18:10:22.755: INFO: Deleting PersistentVolume "local-pvfcxq4" STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-c3658d95-4843-4508-ab2d-e7e778eb58a0" Jun 14 18:10:22.759: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c3658d95-4843-4508-ab2d-e7e778eb58a0"] Namespace:persistent-local-volumes-test-1714 
PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:22.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:22.902: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c3658d95-4843-4508-ab2d-e7e778eb58a0] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:22.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-6777da7c-6bf7-410e-8af9-22456ca5e281" Jun 14 18:10:23.041: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6777da7c-6bf7-410e-8af9-22456ca5e281"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:23.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:23.191: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6777da7c-6bf7-410e-8af9-22456ca5e281] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:23.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-1d760373-3d7f-4a18-aad2-9a86df000ffc" Jun 14 18:10:23.337: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1d760373-3d7f-4a18-aad2-9a86df000ffc"] 
Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:23.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:23.477: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1d760373-3d7f-4a18-aad2-9a86df000ffc] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:23.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-54e6a9b9-fe27-4805-9c5d-82175f58082a" Jun 14 18:10:23.629: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-54e6a9b9-fe27-4805-9c5d-82175f58082a"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:23.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:23.782: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-54e6a9b9-fe27-4805-9c5d-82175f58082a] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:23.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-8b85f24f-2fff-47d2-9173-76fa690d66f6" Jun 14 18:10:23.913: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-8b85f24f-2fff-47d2-9173-76fa690d66f6"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:23.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:24.052: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8b85f24f-2fff-47d2-9173-76fa690d66f6] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:24.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-e0ab8f44-42ef-40aa-bcff-c9a54d954b38" Jun 14 18:10:24.211: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e0ab8f44-42ef-40aa-bcff-c9a54d954b38"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:24.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:24.352: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e0ab8f44-42ef-40aa-bcff-c9a54d954b38] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:24.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-b7067f7d-57ab-48f0-87e3-ae7961e765e3" Jun 14 18:10:24.487: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b7067f7d-57ab-48f0-87e3-ae7961e765e3"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:24.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:24.633: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b7067f7d-57ab-48f0-87e3-ae7961e765e3] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:24.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-3ce36e24-5413-48b6-8dcb-a856c5715248" Jun 14 18:10:24.782: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3ce36e24-5413-48b6-8dcb-a856c5715248"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:24.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:24.921: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3ce36e24-5413-48b6-8dcb-a856c5715248] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:24.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-0181751b-c8a4-4d17-91a5-947da1662690" Jun 14 18:10:25.064: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0181751b-c8a4-4d17-91a5-947da1662690"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:25.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:25.198: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0181751b-c8a4-4d17-91a5-947da1662690] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:25.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "leguer-worker" at path "/tmp/local-volume-test-fe16022e-25a9-4a7a-aa5c-9e4500d4bfa1" Jun 14 18:10:25.334: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-fe16022e-25a9-4a7a-aa5c-9e4500d4bfa1"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:25.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 14 18:10:25.475: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fe16022e-25a9-4a7a-aa5c-9e4500d4bfa1] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-leguer-worker-6hd7h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:25.475: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:10:25.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1714" for this suite. • [SLOW TEST:65.082 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":18,"completed":1,"skipped":144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:493 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:10:25.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 14 18:10:25.685: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:10:25.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1109" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.051 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:493 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:10:25.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 14 18:10:25.727: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:10:25.729: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8716" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:10:25.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Jun 14 18:10:27.794: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3508 PodName:hostexec-leguer-worker-ww52d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:27.794: INFO: >>> kubeConfig: /root/.kube/config Jun 14 18:10:27.967: INFO: exec leguer-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 14 18:10:27.967: INFO: exec leguer-worker: stdout: "0\n" Jun 14 18:10:27.967: INFO: exec leguer-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Jun 14 18:10:27.967: INFO: exec leguer-worker: exit code: 0 Jun 14 18:10:27.967: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:10:27.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3508" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.240 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:484 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:10:27.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 14 
18:10:28.021: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:10:28.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9889" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.048 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:484 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:10:28.035: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Jun 14 18:10:30.090: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8653 PodName:hostexec-leguer-worker-v28hk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 14 18:10:30.090: INFO: >>> kubeConfig: /root/.kube/config Jun 14 18:10:30.266: INFO: exec leguer-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 14 18:10:30.266: INFO: exec leguer-worker: stdout: "0\n" Jun 14 18:10:30.266: INFO: exec leguer-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Jun 14 18:10:30.266: INFO: exec leguer-worker: exit code: 0 Jun 14 18:10:30.266: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:10:30.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8653" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.245 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
SSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:10:30.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:619 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:633 STEP: Clean PV local-pv99dgg [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:11:45.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-763" for this suite. 
• [SLOW TEST:75.391 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:614 all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":18,"completed":2,"skipped":2410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:11:45.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 14 18:11:45.736: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:11:45.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4793" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.052 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:11:45.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 14 18:11:45.779: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:11:45.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9395" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:502 [BeforeEach] 
[sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:11:45.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 14 18:11:45.826: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:11:45.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6373" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:502 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Serial] attach on previously attached volumes should work 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:11:45.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 Jun 14 18:11:45.876: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:11:45.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-97" for this suite. 
S [SKIPPING] [0.049 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:512 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:11:45.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 14 18:11:45.923: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:11:45.924: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4639" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:512 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:480 [BeforeEach] [sig-storage] [Serial] Volume metrics 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:11:45.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 14 18:11:45.973: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:11:45.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-379" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:480 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ 
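Most of the Volume metrics specs above are skipped in `BeforeEach` with the same reason, "Only supported for providers [gce gke aws] (not skeleton)", because this run targets a local kind-style cluster rather than a cloud provider. A minimal sketch for tallying skip reasons across a saved run log (the filename `e2e.log` is an assumption; substitute the actual capture file):

```shell
# Count how many specs were gated on cloud providers in a saved e2e log.
# grep -o emits each matching reason on its own line; sort | uniq -c tallies them.
grep -o 'Only supported for providers \[[^]]*\]' e2e.log | sort | uniq -c | sort -rn
```

This only summarizes provider-gated skips; other skip reasons (such as "Requires at least 1 scsi fs localSSD") would need their own patterns.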
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 18:11:45.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
Jun 14 18:11:52.039: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3215 PodName:hostexec-leguer-worker-xsrb7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:11:52.039: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 18:11:52.285: INFO: exec leguer-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Jun 14 18:11:52.286: INFO: exec leguer-worker: stdout: "0\n"
Jun 14 18:11:52.286: INFO: exec leguer-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Jun 14 18:11:52.286: INFO: exec leguer-worker: exit code: 0
Jun 14 18:11:52.286: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 18:11:52.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3215" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [6.305 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228

      Requires at least 1 scsi fs localSSD
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 18:11:52.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Jun 14 18:11:52.323: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 18:11:52.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-6147" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create metrics for total time taken in volume operations in P/V Controller [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260

  Only supported for providers [gce gke aws] (not skeleton)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 18:11:52.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Jun 14 18:11:52.368: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 18:11:52.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-2793" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create prometheus metrics for volume provisioning and attach/detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100

  Only supported for providers [gce gke aws] (not skeleton)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 18:11:52.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
Jun 14 18:11:58.430: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8607 PodName:hostexec-leguer-worker-47j2x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jun 14 18:11:58.430: INFO: >>> kubeConfig: /root/.kube/config
Jun 14 18:11:58.577: INFO: exec leguer-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Jun 14 18:11:58.577: INFO: exec leguer-worker: stdout: "0\n"
Jun 14 18:11:58.577: INFO: exec leguer-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Jun 14 18:11:58.577: INFO: exec leguer-worker: exit code: 0
Jun 14 18:11:58.577: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 18:11:58.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8607" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [6.203 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Set fsGroup for local volume [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256
      should set different fsGroup for second pod if first pod is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282

      Requires at least 1 scsi fs localSSD
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jun 14 18:11:58.594: INFO: Running AfterSuite actions on all nodes
Jun 14 18:11:58.594: INFO: Running AfterSuite actions on node 1
Jun 14 18:11:58.594: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml
{"msg":"Test Suite completed","total":18,"completed":2,"skipped":5666,"failed":0}

Ran 2 of 5668 Specs in 160.424 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 5666 Skipped
PASS

Ginkgo ran 1 suite in 2m42.115817017s
Test Suite Passed