I0827 14:40:38.205583 17 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0827 14:40:38.205760 17 e2e.go:129] Starting e2e run "c49906dc-42d9-4706-83fd-9ec328a74578" on Ginkgo node 1
{"msg":"Test Suite starting","total":21,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1630075236 - Will randomize all specs
Will run 21 of 5668 specs

Aug 27 14:40:38.342: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 14:40:38.346: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 27 14:40:38.373: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 27 14:40:38.417: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 27 14:40:38.417: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 27 14:40:38.417: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 27 14:40:38.431: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 27 14:40:38.431: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 27 14:40:38.431: INFO: e2e test version: v1.20.10
Aug 27 14:40:38.433: INFO: kube-apiserver version: v1.20.7
Aug 27 14:40:38.433: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 14:40:38.439: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:40:38.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
Aug 27 14:40:38.499: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Aug 27 14:40:38.509: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
Aug 27 14:40:40.532: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7423 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-9gc8s ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug 27 14:40:40.532: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 14:40:40.706: INFO: exec capi-leguer-md-0-555f949c67-5brzb: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Aug 27 14:40:40.706: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stdout: "0\n"
Aug 27 14:40:40.706: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Aug 27 14:40:40.706: INFO: exec capi-leguer-md-0-555f949c67-5brzb: exit code: 0
Aug 27 14:40:40.706: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 14:40:40.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-7423" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.277 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
Two pods mounting a local volume one after the other [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251

Requires at least 1 scsi fs localSSD

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:40:40.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 27 14:40:40.762: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 14:40:40.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-6455" for this suite.
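The [Volume type: gce-localssd-scsi-fs] spec earlier in this log skips because the node has no SCSI-FS local SSD: the harness execs into a privileged hostexec pod, joins the node's mount namespace, and counts entries under the GCE by-uuid directory, and a count of 0 yields the "Requires at least 1 scsi fs localSSD" skip. A minimal sketch of that probe, using the command from the ExecWithOptions entry above (it assumes a hostexec-style container with the host filesystem mounted at /rootfs):

    # Join the node's mount namespace and count SCSI-FS local SSD entries;
    # the spec skips when this prints 0, as seen in the log above.
    nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c \
      'ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l'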
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.052 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should create volume metrics with the correct PVC ref [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203

Only supported for providers [gce gke aws] (not skeleton)

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning errors [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:146
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:40:40.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 27 14:40:40.802: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 14:40:40.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8507" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:146

Only supported for providers [gce gke aws] (not skeleton)

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:512
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:40:40.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 27 14:40:40.844: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 14:40:40.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8651" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
PVController [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
should create total pv count metrics for with plugin and volume mode labels after creating pv
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:512

Only supported for providers [gce gke aws] (not skeleton)

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:40:40.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Pods sharing a single local PV [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:619
[It] all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
STEP: Create a PVC
STEP: Create 50 pods to use this PVC
STEP: Wait for all pods are running
[AfterEach] Pods sharing a single local PV [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:633
STEP: Clean PV local-pv7kbtb
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 14:41:56.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-7857" for this suite.
• [SLOW TEST:75.386 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Pods sharing a single local PV [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:614
all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":21,"completed":1,"skipped":831,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:41:56.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Stress with local volumes [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441
STEP: Setting up 10 local volumes on node "capi-leguer-md-0-555f949c67-5brzb"
STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-393271e0-e092-427d-bedb-95a5c8dc6b58"
Aug 27 14:42:18.313: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-393271e0-e092-427d-bedb-95a5c8dc6b58" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-393271e0-e092-427d-bedb-95a5c8dc6b58" "/tmp/local-volume-test-393271e0-e092-427d-bedb-95a5c8dc6b58"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug 27 14:42:18.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-fedbc7dc-a68e-4696-b2b2-b7a71330eb3c"
Aug 27 14:42:18.441: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-fedbc7dc-a68e-4696-b2b2-b7a71330eb3c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-fedbc7dc-a68e-4696-b2b2-b7a71330eb3c" "/tmp/local-volume-test-fedbc7dc-a68e-4696-b2b2-b7a71330eb3c"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Aug 27 14:42:18.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path
"/tmp/local-volume-test-3a197f2e-536b-4eef-836d-1fbeb41cdb15" Aug 27 14:42:18.567: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3a197f2e-536b-4eef-836d-1fbeb41cdb15" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3a197f2e-536b-4eef-836d-1fbeb41cdb15" "/tmp/local-volume-test-3a197f2e-536b-4eef-836d-1fbeb41cdb15"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:18.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-507200cd-4526-4bff-a0c4-b3fe2f04d73f" Aug 27 14:42:18.699: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-507200cd-4526-4bff-a0c4-b3fe2f04d73f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-507200cd-4526-4bff-a0c4-b3fe2f04d73f" "/tmp/local-volume-test-507200cd-4526-4bff-a0c4-b3fe2f04d73f"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:18.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-0219837c-1f93-4140-90d6-1c8ea570f6f3" Aug 27 14:42:18.824: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0219837c-1f93-4140-90d6-1c8ea570f6f3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0219837c-1f93-4140-90d6-1c8ea570f6f3" "/tmp/local-volume-test-0219837c-1f93-4140-90d6-1c8ea570f6f3"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:18.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-edb15d58-b565-49c7-8fff-400531ba66c0" Aug 27 14:42:18.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-edb15d58-b565-49c7-8fff-400531ba66c0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-edb15d58-b565-49c7-8fff-400531ba66c0" "/tmp/local-volume-test-edb15d58-b565-49c7-8fff-400531ba66c0"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:18.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-7a577d5d-5fbd-4ddb-b1f0-e78236e48703" Aug 27 14:42:19.070: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7a577d5d-5fbd-4ddb-b1f0-e78236e48703" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7a577d5d-5fbd-4ddb-b1f0-e78236e48703" "/tmp/local-volume-test-7a577d5d-5fbd-4ddb-b1f0-e78236e48703"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:19.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-baff706e-b3d8-4285-a592-b0912679fe35" Aug 27 14:42:19.233: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-baff706e-b3d8-4285-a592-b0912679fe35" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-baff706e-b3d8-4285-a592-b0912679fe35" "/tmp/local-volume-test-baff706e-b3d8-4285-a592-b0912679fe35"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:19.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-878183af-18fb-48e9-a4ec-d5c944afd31d" Aug 27 14:42:19.324: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-878183af-18fb-48e9-a4ec-d5c944afd31d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-878183af-18fb-48e9-a4ec-d5c944afd31d" "/tmp/local-volume-test-878183af-18fb-48e9-a4ec-d5c944afd31d"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:19.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-c135b31b-bd35-40c1-be4d-de04e0b72618" Aug 27 14:42:19.446: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c135b31b-bd35-40c1-be4d-de04e0b72618" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c135b31b-bd35-40c1-be4d-de04e0b72618" "/tmp/local-volume-test-c135b31b-bd35-40c1-be4d-de04e0b72618"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:19.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "capi-leguer-md-0-555f949c67-tw45m" STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-1e7911e5-958f-4813-b42f-9e264f60e55e" Aug 27 14:42:21.592: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1e7911e5-958f-4813-b42f-9e264f60e55e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1e7911e5-958f-4813-b42f-9e264f60e55e" "/tmp/local-volume-test-1e7911e5-958f-4813-b42f-9e264f60e55e"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:21.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-526950ad-d58e-41fc-ab6e-b87713be4fbd" Aug 27 14:42:21.693: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-526950ad-d58e-41fc-ab6e-b87713be4fbd" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-526950ad-d58e-41fc-ab6e-b87713be4fbd" "/tmp/local-volume-test-526950ad-d58e-41fc-ab6e-b87713be4fbd"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:21.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-6cd63dee-521a-41f2-8a8f-1fd32bcd4b9f" Aug 27 14:42:21.826: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6cd63dee-521a-41f2-8a8f-1fd32bcd4b9f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6cd63dee-521a-41f2-8a8f-1fd32bcd4b9f" "/tmp/local-volume-test-6cd63dee-521a-41f2-8a8f-1fd32bcd4b9f"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:21.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-6fc7ba27-06a8-4fac-a366-27c335460226" Aug 27 14:42:21.958: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6fc7ba27-06a8-4fac-a366-27c335460226" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6fc7ba27-06a8-4fac-a366-27c335460226" "/tmp/local-volume-test-6fc7ba27-06a8-4fac-a366-27c335460226"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:21.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-2fcf94db-3188-44f2-90a9-b627fc6a4b0c" Aug 27 14:42:22.082: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2fcf94db-3188-44f2-90a9-b627fc6a4b0c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2fcf94db-3188-44f2-90a9-b627fc6a4b0c" "/tmp/local-volume-test-2fcf94db-3188-44f2-90a9-b627fc6a4b0c"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:22.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-448fff87-92dd-4a18-bfe4-cea0c5364a88" Aug 27 14:42:22.214: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-448fff87-92dd-4a18-bfe4-cea0c5364a88" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-448fff87-92dd-4a18-bfe4-cea0c5364a88" "/tmp/local-volume-test-448fff87-92dd-4a18-bfe4-cea0c5364a88"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:22.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount 
point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-523e32cb-0fd1-421e-9e1f-9434462396a9" Aug 27 14:42:22.340: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-523e32cb-0fd1-421e-9e1f-9434462396a9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-523e32cb-0fd1-421e-9e1f-9434462396a9" "/tmp/local-volume-test-523e32cb-0fd1-421e-9e1f-9434462396a9"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:22.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-6f793844-1ef9-46fb-9c9e-3c92011b0c16" Aug 27 14:42:22.464: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6f793844-1ef9-46fb-9c9e-3c92011b0c16" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6f793844-1ef9-46fb-9c9e-3c92011b0c16" "/tmp/local-volume-test-6f793844-1ef9-46fb-9c9e-3c92011b0c16"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:22.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-4793dadc-cc6b-41a5-88c7-621b2aa2feee" Aug 27 14:42:22.548: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4793dadc-cc6b-41a5-88c7-621b2aa2feee" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4793dadc-cc6b-41a5-88c7-621b2aa2feee" "/tmp/local-volume-test-4793dadc-cc6b-41a5-88c7-621b2aa2feee"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:22.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-467d7fac-db9b-4bf2-84e4-10a58d21e81b" Aug 27 14:42:22.678: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-467d7fac-db9b-4bf2-84e4-10a58d21e81b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-467d7fac-db9b-4bf2-84e4-10a58d21e81b" "/tmp/local-volume-test-467d7fac-db9b-4bf2-84e4-10a58d21e81b"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:42:22.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully Aug 27 14:42:28.996: INFO: Deleting pod pod-d208f65f-3a17-432e-a10e-687b73eeae1b Aug 27 14:42:29.006: INFO: Deleting PersistentVolumeClaim "pvc-5gtln" Aug 27 14:42:29.011: INFO: Deleting PersistentVolumeClaim 
"pvc-tltc7" Aug 27 14:42:29.015: INFO: Deleting PersistentVolumeClaim "pvc-978sg" Aug 27 14:42:29.019: INFO: 1/28 pods finished STEP: Delete "local-pvjwgps" and create a new PV for same local volume storage STEP: Delete "local-pvjwgps" and create a new PV for same local volume storage STEP: Delete "local-pvkxcb9" and create a new PV for same local volume storage STEP: Delete "local-pv8rht9" and create a new PV for same local volume storage Aug 27 14:42:29.996: INFO: Deleting pod pod-c4a4cd3d-4224-486b-a289-8a8d9c7d3d60 Aug 27 14:42:30.003: INFO: Deleting PersistentVolumeClaim "pvc-d7jmx" Aug 27 14:42:30.008: INFO: Deleting PersistentVolumeClaim "pvc-lgq46" Aug 27 14:42:30.012: INFO: Deleting PersistentVolumeClaim "pvc-5wpph" Aug 27 14:42:30.016: INFO: 2/28 pods finished STEP: Delete "local-pvjplxz" and create a new PV for same local volume storage STEP: Delete "local-pvjplxz" and create a new PV for same local volume storage STEP: Delete "local-pvnzxph" and create a new PV for same local volume storage STEP: Delete "local-pv8bph7" and create a new PV for same local volume storage Aug 27 14:42:30.996: INFO: Deleting pod pod-d9b74c9b-1fb1-4522-9182-f816089c1dd6 Aug 27 14:42:31.004: INFO: Deleting PersistentVolumeClaim "pvc-nxzxm" Aug 27 14:42:31.009: INFO: Deleting PersistentVolumeClaim "pvc-l85qz" Aug 27 14:42:31.013: INFO: Deleting PersistentVolumeClaim "pvc-qr2zs" Aug 27 14:42:31.018: INFO: 3/28 pods finished STEP: Delete "local-pv2vl74" and create a new PV for same local volume storage STEP: Delete "local-pvt2xml" and create a new PV for same local volume storage STEP: Delete "local-pvwdvq9" and create a new PV for same local volume storage Aug 27 14:42:35.996: INFO: Deleting pod pod-a629222f-8f70-4e34-9ecc-69912bc851d7 Aug 27 14:42:36.004: INFO: Deleting PersistentVolumeClaim "pvc-z9fjn" Aug 27 14:42:36.009: INFO: Deleting PersistentVolumeClaim "pvc-kwmzg" Aug 27 14:42:36.015: INFO: Deleting PersistentVolumeClaim "pvc-rv79s" Aug 27 14:42:36.020: INFO: 4/28 pods finished STEP: Delete "local-pvvx5jl" and create a new PV for same local volume storage STEP: Delete "local-pvvx5jl" and create a new PV for same local volume storage STEP: Delete "local-pvhrnj2" and create a new PV for same local volume storage STEP: Delete "local-pvx6jvd" and create a new PV for same local volume storage Aug 27 14:42:41.996: INFO: Deleting pod pod-3e9ca3d3-47cd-4f3c-82f2-b89c9b563db7 Aug 27 14:42:42.006: INFO: Deleting PersistentVolumeClaim "pvc-lgjtt" Aug 27 14:42:42.011: INFO: Deleting PersistentVolumeClaim "pvc-wh8z6" Aug 27 14:42:42.019: INFO: Deleting PersistentVolumeClaim "pvc-qghgn" Aug 27 14:42:42.023: INFO: 5/28 pods finished STEP: Delete "local-pvtl4lm" and create a new PV for same local volume storage STEP: Delete "local-pvtl4lm" and create a new PV for same local volume storage STEP: Delete "local-pvvkp5t" and create a new PV for same local volume storage STEP: Delete "local-pvvkp5t" and create a new PV for same local volume storage STEP: Delete "local-pvhcc9r" and create a new PV for same local volume storage STEP: Delete "local-pvhcc9r" and create a new PV for same local volume storage Aug 27 14:42:43.996: INFO: Deleting pod pod-2e54ee42-8dee-4087-831e-9f61a848d60f Aug 27 14:42:44.006: INFO: Deleting PersistentVolumeClaim "pvc-6glbj" Aug 27 14:42:44.010: INFO: Deleting PersistentVolumeClaim "pvc-l6xsv" Aug 27 14:42:44.018: INFO: Deleting PersistentVolumeClaim "pvc-wzjh5" Aug 27 14:42:44.022: INFO: 6/28 pods finished STEP: Delete "local-pvpqmb8" and create a new PV for same local volume storage 
STEP: Delete "local-pvhpwg4" and create a new PV for same local volume storage STEP: Delete "local-pvpmstp" and create a new PV for same local volume storage Aug 27 14:42:45.996: INFO: Deleting pod pod-15a6f61c-117a-4bdb-ad52-fa8647eb67b2 Aug 27 14:42:46.004: INFO: Deleting PersistentVolumeClaim "pvc-bzwd6" Aug 27 14:42:46.009: INFO: Deleting PersistentVolumeClaim "pvc-ftc9h" Aug 27 14:42:46.016: INFO: Deleting PersistentVolumeClaim "pvc-xf49b" Aug 27 14:42:46.020: INFO: 7/28 pods finished STEP: Delete "local-pv76tgb" and create a new PV for same local volume storage STEP: Delete "local-pvqzxlh" and create a new PV for same local volume storage STEP: Delete "local-pvdsvh5" and create a new PV for same local volume storage STEP: Delete "local-pv7kbtb" and create a new PV for same local volume storage Aug 27 14:42:47.996: INFO: Deleting pod pod-7387f56c-7fc0-45b5-b910-461da1aa2a9b Aug 27 14:42:48.003: INFO: Deleting PersistentVolumeClaim "pvc-5kwz8" Aug 27 14:42:48.008: INFO: Deleting PersistentVolumeClaim "pvc-dnqqj" Aug 27 14:42:48.015: INFO: Deleting PersistentVolumeClaim "pvc-7x8fr" Aug 27 14:42:48.021: INFO: 8/28 pods finished Aug 27 14:42:48.021: INFO: Deleting pod pod-9ca572ba-8469-4f91-ada6-d64526c8a020 Aug 27 14:42:48.035: INFO: Deleting PersistentVolumeClaim "pvc-hr46j" STEP: Delete "local-pv59xcz" and create a new PV for same local volume storage Aug 27 14:42:48.042: INFO: Deleting PersistentVolumeClaim "pvc-lp8vd" Aug 27 14:42:48.048: INFO: Deleting PersistentVolumeClaim "pvc-5csfq" Aug 27 14:42:48.052: INFO: 9/28 pods finished STEP: Delete "local-pv59xcz" and create a new PV for same local volume storage STEP: Delete "local-pvc8mjc" and create a new PV for same local volume storage STEP: Delete "local-pvc8mjc" and create a new PV for same local volume storage STEP: Delete "local-pvnnkrl" and create a new PV for same local volume storage STEP: Delete "local-pvnnkrl" and create a new PV for same local volume storage STEP: Delete "local-pvk6gj5" and create a new PV for same local volume storage STEP: Delete "local-pvhn56z" and create a new PV for same local volume storage STEP: Delete "local-pvr5z2m" and create a new PV for same local volume storage Aug 27 14:42:48.996: INFO: Deleting pod pod-a3b881b5-98e2-4f5e-b5a8-27af92f5baf6 Aug 27 14:42:49.007: INFO: Deleting PersistentVolumeClaim "pvc-bdtnv" Aug 27 14:42:49.011: INFO: Deleting PersistentVolumeClaim "pvc-mwvzc" Aug 27 14:42:49.016: INFO: Deleting PersistentVolumeClaim "pvc-4tt42" Aug 27 14:42:49.020: INFO: 10/28 pods finished STEP: Delete "local-pv9pd9j" and create a new PV for same local volume storage STEP: Delete "local-pv9pd9j" and create a new PV for same local volume storage STEP: Delete "local-pv7hj6n" and create a new PV for same local volume storage STEP: Delete "local-pv84c6c" and create a new PV for same local volume storage Aug 27 14:42:51.996: INFO: Deleting pod pod-bb964c6d-862f-45cb-bdd8-3692836b824a Aug 27 14:42:52.005: INFO: Deleting PersistentVolumeClaim "pvc-2tqvw" Aug 27 14:42:52.010: INFO: Deleting PersistentVolumeClaim "pvc-bzppp" Aug 27 14:42:52.020: INFO: Deleting PersistentVolumeClaim "pvc-vjmhk" Aug 27 14:42:52.025: INFO: 11/28 pods finished STEP: Delete "local-pv25lkn" and create a new PV for same local volume storage STEP: Delete "local-pv25lkn" and create a new PV for same local volume storage STEP: Delete "local-pvtfkrh" and create a new PV for same local volume storage STEP: Delete "local-pvtfkrh" and create a new PV for same local volume storage STEP: Delete "local-pvncszw" and create a new PV 
for same local volume storage Aug 27 14:42:53.997: INFO: Deleting pod pod-5973c9ff-df49-470e-bb75-d60f59b46dc5 Aug 27 14:42:54.005: INFO: Deleting PersistentVolumeClaim "pvc-6nzq5" Aug 27 14:42:54.010: INFO: Deleting PersistentVolumeClaim "pvc-4n78w" Aug 27 14:42:54.015: INFO: Deleting PersistentVolumeClaim "pvc-tqlz7" Aug 27 14:42:54.061: INFO: 12/28 pods finished STEP: Delete "local-pvzgwpk" and create a new PV for same local volume storage STEP: Delete "local-pvzgwpk" and create a new PV for same local volume storage STEP: Delete "local-pvgztzq" and create a new PV for same local volume storage STEP: Delete "local-pvgztzq" and create a new PV for same local volume storage STEP: Delete "local-pvkjsjs" and create a new PV for same local volume storage STEP: Delete "local-pvkjsjs" and create a new PV for same local volume storage Aug 27 14:42:57.997: INFO: Deleting pod pod-29eb5041-0ed4-4887-abea-3e9477c052bc Aug 27 14:42:58.004: INFO: Deleting PersistentVolumeClaim "pvc-f9rlr" Aug 27 14:42:58.008: INFO: Deleting PersistentVolumeClaim "pvc-mbz56" Aug 27 14:42:58.016: INFO: Deleting PersistentVolumeClaim "pvc-wl4gw" Aug 27 14:42:58.021: INFO: 13/28 pods finished Aug 27 14:42:58.021: INFO: Deleting pod pod-baef6040-d3bb-41d5-a8fd-395e990c75b1 Aug 27 14:42:58.031: INFO: Deleting PersistentVolumeClaim "pvc-2z8fl" STEP: Delete "local-pvn9tfs" and create a new PV for same local volume storage Aug 27 14:42:58.034: INFO: Deleting PersistentVolumeClaim "pvc-7w4d6" Aug 27 14:42:58.038: INFO: Deleting PersistentVolumeClaim "pvc-rlzgd" Aug 27 14:42:58.042: INFO: 14/28 pods finished STEP: Delete "local-pvn9tfs" and create a new PV for same local volume storage STEP: Delete "local-pvl5ghk" and create a new PV for same local volume storage STEP: Delete "local-pvl5ghk" and create a new PV for same local volume storage STEP: Delete "local-pv64csl" and create a new PV for same local volume storage STEP: Delete "local-pvd9fnp" and create a new PV for same local volume storage STEP: Delete "local-pvhg8h5" and create a new PV for same local volume storage STEP: Delete "local-pv9pwpj" and create a new PV for same local volume storage Aug 27 14:42:59.996: INFO: Deleting pod pod-2d19bf6d-7fc8-48a1-baab-7fbadc909893 Aug 27 14:43:00.004: INFO: Deleting PersistentVolumeClaim "pvc-qftwz" Aug 27 14:43:00.009: INFO: Deleting PersistentVolumeClaim "pvc-bdz28" Aug 27 14:43:00.014: INFO: Deleting PersistentVolumeClaim "pvc-5w9sq" Aug 27 14:43:00.018: INFO: 15/28 pods finished STEP: Delete "local-pvq6cqh" and create a new PV for same local volume storage STEP: Delete "local-pvq6cqh" and create a new PV for same local volume storage STEP: Delete "local-pv6q9k5" and create a new PV for same local volume storage STEP: Delete "local-pv6q9k5" and create a new PV for same local volume storage STEP: Delete "local-pvd62nv" and create a new PV for same local volume storage Aug 27 14:43:00.996: INFO: Deleting pod pod-15263e3e-d14b-4e95-875b-8fd68c1ea0a8 Aug 27 14:43:01.005: INFO: Deleting PersistentVolumeClaim "pvc-fghv8" Aug 27 14:43:01.009: INFO: Deleting PersistentVolumeClaim "pvc-bsw7z" Aug 27 14:43:01.013: INFO: Deleting PersistentVolumeClaim "pvc-6gdvz" Aug 27 14:43:01.017: INFO: 16/28 pods finished STEP: Delete "local-pv8pm5n" and create a new PV for same local volume storage STEP: Delete "local-pv8pm5n" and create a new PV for same local volume storage STEP: Delete "local-pv4nfmp" and create a new PV for same local volume storage STEP: Delete "local-pvqvlh4" and create a new PV for same local volume storage Aug 27 
14:43:02.996: INFO: Deleting pod pod-8c555e5f-6542-454b-8e30-ec8a3997c757 Aug 27 14:43:03.004: INFO: Deleting PersistentVolumeClaim "pvc-bsbxl" Aug 27 14:43:03.008: INFO: Deleting PersistentVolumeClaim "pvc-qrk77" Aug 27 14:43:03.016: INFO: Deleting PersistentVolumeClaim "pvc-tdc29" Aug 27 14:43:03.021: INFO: 17/28 pods finished STEP: Delete "local-pv9wd45" and create a new PV for same local volume storage STEP: Delete "local-pv9wd45" and create a new PV for same local volume storage STEP: Delete "local-pv65pd4" and create a new PV for same local volume storage STEP: Delete "local-pvvsv6v" and create a new PV for same local volume storage Aug 27 14:43:04.996: INFO: Deleting pod pod-5eb70061-bd01-44f0-9957-3ac9484345d9 Aug 27 14:43:05.003: INFO: Deleting PersistentVolumeClaim "pvc-jn8kv" Aug 27 14:43:05.007: INFO: Deleting PersistentVolumeClaim "pvc-dzg7m" Aug 27 14:43:05.019: INFO: Deleting PersistentVolumeClaim "pvc-jfdgn" Aug 27 14:43:05.023: INFO: 18/28 pods finished STEP: Delete "local-pv9lzq7" and create a new PV for same local volume storage STEP: Delete "local-pv9lzq7" and create a new PV for same local volume storage STEP: Delete "local-pvt28nl" and create a new PV for same local volume storage STEP: Delete "local-pvt28nl" and create a new PV for same local volume storage STEP: Delete "local-pv8ptjn" and create a new PV for same local volume storage STEP: Delete "local-pv8ptjn" and create a new PV for same local volume storage Aug 27 14:43:06.996: INFO: Deleting pod pod-8af8d13a-a7c6-49fb-a12e-c18e757664a1 Aug 27 14:43:07.006: INFO: Deleting PersistentVolumeClaim "pvc-6lbxz" Aug 27 14:43:07.011: INFO: Deleting PersistentVolumeClaim "pvc-jnktf" Aug 27 14:43:07.015: INFO: Deleting PersistentVolumeClaim "pvc-kw5s2" Aug 27 14:43:07.020: INFO: 19/28 pods finished STEP: Delete "local-pvvcd78" and create a new PV for same local volume storage STEP: Delete "local-pvvcd78" and create a new PV for same local volume storage STEP: Delete "local-pvf7slw" and create a new PV for same local volume storage STEP: Delete "local-pvf7slw" and create a new PV for same local volume storage STEP: Delete "local-pvt7h7p" and create a new PV for same local volume storage STEP: Delete "local-pvt7h7p" and create a new PV for same local volume storage Aug 27 14:43:07.996: INFO: Deleting pod pod-7b62c868-1e0a-468f-abd5-bd63f2aa8fa2 Aug 27 14:43:08.004: INFO: Deleting PersistentVolumeClaim "pvc-tdmtz" Aug 27 14:43:08.007: INFO: Deleting PersistentVolumeClaim "pvc-25tbv" Aug 27 14:43:08.014: INFO: Deleting PersistentVolumeClaim "pvc-s7f4j" Aug 27 14:43:08.018: INFO: 20/28 pods finished STEP: Delete "local-pvt6lcp" and create a new PV for same local volume storage STEP: Delete "local-pvt6lcp" and create a new PV for same local volume storage STEP: Delete "local-pvnb5mr" and create a new PV for same local volume storage STEP: Delete "local-pv47bgs" and create a new PV for same local volume storage Aug 27 14:43:08.997: INFO: Deleting pod pod-1e90cc9f-cdb5-47e9-b0b1-0e8aa1761e6e Aug 27 14:43:09.005: INFO: Deleting PersistentVolumeClaim "pvc-4pph5" Aug 27 14:43:09.009: INFO: Deleting PersistentVolumeClaim "pvc-n9hdt" Aug 27 14:43:09.015: INFO: Deleting PersistentVolumeClaim "pvc-r8zjk" Aug 27 14:43:09.019: INFO: 21/28 pods finished STEP: Delete "local-pvclkrh" and create a new PV for same local volume storage STEP: Delete "local-pvclkrh" and create a new PV for same local volume storage STEP: Delete "local-pvrqmj7" and create a new PV for same local volume storage STEP: Delete "local-pvc2f2x" and create a new PV for 
same local volume storage Aug 27 14:43:11.997: INFO: Deleting pod pod-fe5b4cf6-0158-4868-822c-ea666ca8c7ed Aug 27 14:43:12.006: INFO: Deleting PersistentVolumeClaim "pvc-s6fv4" Aug 27 14:43:12.011: INFO: Deleting PersistentVolumeClaim "pvc-9t9zm" Aug 27 14:43:12.024: INFO: Deleting PersistentVolumeClaim "pvc-kccb7" Aug 27 14:43:12.029: INFO: 22/28 pods finished STEP: Delete "local-pv6s2hw" and create a new PV for same local volume storage STEP: Delete "local-pv7qgmv" and create a new PV for same local volume storage STEP: Delete "local-pvcqm2s" and create a new PV for same local volume storage Aug 27 14:43:12.996: INFO: Deleting pod pod-14ead229-337c-47fe-ba05-d29365638344 Aug 27 14:43:13.007: INFO: Deleting PersistentVolumeClaim "pvc-7hl9z" Aug 27 14:43:13.011: INFO: Deleting PersistentVolumeClaim "pvc-7vpm5" Aug 27 14:43:13.020: INFO: Deleting PersistentVolumeClaim "pvc-6xp6w" Aug 27 14:43:13.025: INFO: 23/28 pods finished STEP: Delete "local-pvrsgdv" and create a new PV for same local volume storage STEP: Delete "local-pvrsgdv" and create a new PV for same local volume storage STEP: Delete "local-pvznvnk" and create a new PV for same local volume storage STEP: Delete "local-pvz8tsj" and create a new PV for same local volume storage Aug 27 14:43:14.996: INFO: Deleting pod pod-8e1b9741-e3d4-4309-9fac-732f86fa98c3 Aug 27 14:43:15.005: INFO: Deleting PersistentVolumeClaim "pvc-ptk6k" Aug 27 14:43:15.010: INFO: Deleting PersistentVolumeClaim "pvc-b22g8" Aug 27 14:43:15.016: INFO: Deleting PersistentVolumeClaim "pvc-jlhxn" Aug 27 14:43:15.021: INFO: 24/28 pods finished STEP: Delete "local-pvkksgx" and create a new PV for same local volume storage STEP: Delete "local-pvkksgx" and create a new PV for same local volume storage STEP: Delete "local-pv6f424" and create a new PV for same local volume storage STEP: Delete "local-pv4tgpv" and create a new PV for same local volume storage Aug 27 14:43:16.996: INFO: Deleting pod pod-f8507ff5-4b10-4f86-8667-74d4b682c01f Aug 27 14:43:17.005: INFO: Deleting PersistentVolumeClaim "pvc-hxrvt" Aug 27 14:43:17.015: INFO: Deleting PersistentVolumeClaim "pvc-5gtrs" Aug 27 14:43:17.024: INFO: Deleting PersistentVolumeClaim "pvc-ghnx9" Aug 27 14:43:17.028: INFO: 25/28 pods finished STEP: Delete "local-pv2j6rc" and create a new PV for same local volume storage STEP: Delete "local-pv99tt7" and create a new PV for same local volume storage STEP: Delete "local-pv99tt7" and create a new PV for same local volume storage STEP: Delete "local-pvqbpj9" and create a new PV for same local volume storage Aug 27 14:43:17.996: INFO: Deleting pod pod-41c92e49-462f-4a83-8187-46bbd58e0294 Aug 27 14:43:18.005: INFO: Deleting PersistentVolumeClaim "pvc-8cw5m" Aug 27 14:43:18.010: INFO: Deleting PersistentVolumeClaim "pvc-8sjjg" Aug 27 14:43:18.018: INFO: Deleting PersistentVolumeClaim "pvc-9lv6p" Aug 27 14:43:18.023: INFO: 26/28 pods finished STEP: Delete "local-pvjbw2c" and create a new PV for same local volume storage STEP: Delete "local-pvjbw2c" and create a new PV for same local volume storage STEP: Delete "local-pvtk5wr" and create a new PV for same local volume storage STEP: Delete "local-pvdfrgw" and create a new PV for same local volume storage Aug 27 14:43:18.996: INFO: Deleting pod pod-0c833695-ea47-4c91-a184-a7257b29fbcb Aug 27 14:43:19.005: INFO: Deleting PersistentVolumeClaim "pvc-jh2tr" Aug 27 14:43:19.010: INFO: Deleting PersistentVolumeClaim "pvc-wt4nx" Aug 27 14:43:19.016: INFO: Deleting PersistentVolumeClaim "pvc-mzq6j" Aug 27 14:43:19.021: INFO: 27/28 pods finished 
STEP: Delete "local-pvw6t9x" and create a new PV for same local volume storage STEP: Delete "local-pvlszdv" and create a new PV for same local volume storage STEP: Delete "local-pvxbcj9" and create a new PV for same local volume storage Aug 27 14:43:19.996: INFO: Deleting pod pod-6308d210-090d-412f-94c0-ab871e621d5f Aug 27 14:43:20.009: INFO: Deleting PersistentVolumeClaim "pvc-sj7mv" Aug 27 14:43:20.013: INFO: Deleting PersistentVolumeClaim "pvc-d9c69" Aug 27 14:43:20.022: INFO: Deleting PersistentVolumeClaim "pvc-n9chg" Aug 27 14:43:20.026: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:505 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "capi-leguer-md-0-555f949c67-5brzb" STEP: Cleaning up PVC and PV Aug 27 14:43:20.026: INFO: pvc is nil Aug 27 14:43:20.026: INFO: Deleting PersistentVolume "local-pvmc67m" STEP: Cleaning up PVC and PV Aug 27 14:43:20.030: INFO: pvc is nil Aug 27 14:43:20.030: INFO: Deleting PersistentVolume "local-pvl79rs" STEP: Cleaning up PVC and PV Aug 27 14:43:20.035: INFO: pvc is nil Aug 27 14:43:20.035: INFO: Deleting PersistentVolume "local-pvpjf7q" STEP: Cleaning up PVC and PV Aug 27 14:43:20.039: INFO: pvc is nil Aug 27 14:43:20.039: INFO: Deleting PersistentVolume "local-pvb6wcn" STEP: Cleaning up PVC and PV Aug 27 14:43:20.042: INFO: pvc is nil Aug 27 14:43:20.042: INFO: Deleting PersistentVolume "local-pvzqdjf" STEP: Cleaning up PVC and PV Aug 27 14:43:20.045: INFO: pvc is nil Aug 27 14:43:20.045: INFO: Deleting PersistentVolume "local-pvqmhlr" STEP: Cleaning up PVC and PV Aug 27 14:43:20.049: INFO: pvc is nil Aug 27 14:43:20.049: INFO: Deleting PersistentVolume "local-pv2fs8n" STEP: Cleaning up PVC and PV Aug 27 14:43:20.052: INFO: pvc is nil Aug 27 14:43:20.052: INFO: Deleting PersistentVolume "local-pv2gzkz" STEP: Cleaning up PVC and PV Aug 27 14:43:20.056: INFO: pvc is nil Aug 27 14:43:20.056: INFO: Deleting PersistentVolume "local-pvjxpfl" STEP: Cleaning up PVC and PV Aug 27 14:43:20.060: INFO: pvc is nil Aug 27 14:43:20.060: INFO: Deleting PersistentVolume "local-pv5558q" STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-393271e0-e092-427d-bedb-95a5c8dc6b58" Aug 27 14:43:20.063: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-393271e0-e092-427d-bedb-95a5c8dc6b58"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:20.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:20.211: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-393271e0-e092-427d-bedb-95a5c8dc6b58] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:20.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-fedbc7dc-a68e-4696-b2b2-b7a71330eb3c" Aug 27 14:43:20.336: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-fedbc7dc-a68e-4696-b2b2-b7a71330eb3c"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:20.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:20.420: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fedbc7dc-a68e-4696-b2b2-b7a71330eb3c] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:20.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-3a197f2e-536b-4eef-836d-1fbeb41cdb15" Aug 27 14:43:20.544: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3a197f2e-536b-4eef-836d-1fbeb41cdb15"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:20.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:20.630: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3a197f2e-536b-4eef-836d-1fbeb41cdb15] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:20.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-507200cd-4526-4bff-a0c4-b3fe2f04d73f" Aug 27 14:43:20.759: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-507200cd-4526-4bff-a0c4-b3fe2f04d73f"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:20.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:20.839: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-507200cd-4526-4bff-a0c4-b3fe2f04d73f] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:20.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-0219837c-1f93-4140-90d6-1c8ea570f6f3" Aug 27 14:43:20.957: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0219837c-1f93-4140-90d6-1c8ea570f6f3"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 
14:43:20.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:21.040: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0219837c-1f93-4140-90d6-1c8ea570f6f3] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:21.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-edb15d58-b565-49c7-8fff-400531ba66c0" Aug 27 14:43:21.162: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-edb15d58-b565-49c7-8fff-400531ba66c0"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:21.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:21.278: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-edb15d58-b565-49c7-8fff-400531ba66c0] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:21.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-7a577d5d-5fbd-4ddb-b1f0-e78236e48703" Aug 27 14:43:21.404: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7a577d5d-5fbd-4ddb-b1f0-e78236e48703"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:21.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:21.484: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7a577d5d-5fbd-4ddb-b1f0-e78236e48703] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:21.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-baff706e-b3d8-4285-a592-b0912679fe35" Aug 27 14:43:21.596: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-baff706e-b3d8-4285-a592-b0912679fe35"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:21.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:21.748: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-baff706e-b3d8-4285-a592-b0912679fe35] Namespace:persistent-local-volumes-test-240 
PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:21.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-878183af-18fb-48e9-a4ec-d5c944afd31d" Aug 27 14:43:21.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-878183af-18fb-48e9-a4ec-d5c944afd31d"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:21.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:22.024: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-878183af-18fb-48e9-a4ec-d5c944afd31d] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:22.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-5brzb" at path "/tmp/local-volume-test-c135b31b-bd35-40c1-be4d-de04e0b72618" Aug 27 14:43:22.142: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c135b31b-bd35-40c1-be4d-de04e0b72618"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:22.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:22.258: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c135b31b-bd35-40c1-be4d-de04e0b72618] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xksff ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:22.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "capi-leguer-md-0-555f949c67-tw45m" STEP: Cleaning up PVC and PV Aug 27 14:43:22.337: INFO: pvc is nil Aug 27 14:43:22.337: INFO: Deleting PersistentVolume "local-pvcxq5q" STEP: Cleaning up PVC and PV Aug 27 14:43:22.343: INFO: pvc is nil Aug 27 14:43:22.343: INFO: Deleting PersistentVolume "local-pvz9qvq" STEP: Cleaning up PVC and PV Aug 27 14:43:22.356: INFO: pvc is nil Aug 27 14:43:22.356: INFO: Deleting PersistentVolume "local-pvkb9np" STEP: Cleaning up PVC and PV Aug 27 14:43:22.360: INFO: pvc is nil Aug 27 14:43:22.360: INFO: Deleting PersistentVolume "local-pv7h2cg" STEP: Cleaning up PVC and PV Aug 27 14:43:22.365: INFO: pvc is nil Aug 27 14:43:22.365: INFO: Deleting PersistentVolume "local-pvvd7dk" STEP: Cleaning up PVC and PV Aug 27 14:43:22.369: INFO: pvc is nil Aug 27 14:43:22.369: INFO: Deleting PersistentVolume "local-pvw5zsg" STEP: Cleaning up PVC and PV Aug 27 14:43:22.372: INFO: pvc is nil Aug 27 14:43:22.372: INFO: Deleting PersistentVolume "local-pv7zfbb" STEP: Cleaning up PVC and PV Aug 27 14:43:22.376: INFO: pvc is nil Aug 27 14:43:22.376: INFO: Deleting PersistentVolume "local-pvclhsj" STEP: 
Cleaning up PVC and PV Aug 27 14:43:22.379: INFO: pvc is nil Aug 27 14:43:22.379: INFO: Deleting PersistentVolume "local-pvdfsn9" STEP: Cleaning up PVC and PV Aug 27 14:43:22.383: INFO: pvc is nil Aug 27 14:43:22.383: INFO: Deleting PersistentVolume "local-pvlb86d" STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-1e7911e5-958f-4813-b42f-9e264f60e55e" Aug 27 14:43:22.387: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1e7911e5-958f-4813-b42f-9e264f60e55e"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:22.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:22.467: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1e7911e5-958f-4813-b42f-9e264f60e55e] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:22.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-526950ad-d58e-41fc-ab6e-b87713be4fbd" Aug 27 14:43:22.583: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-526950ad-d58e-41fc-ab6e-b87713be4fbd"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:22.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:22.698: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-526950ad-d58e-41fc-ab6e-b87713be4fbd] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:22.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-6cd63dee-521a-41f2-8a8f-1fd32bcd4b9f" Aug 27 14:43:22.820: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6cd63dee-521a-41f2-8a8f-1fd32bcd4b9f"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:22.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:22.947: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6cd63dee-521a-41f2-8a8f-1fd32bcd4b9f] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:22.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" 
at path "/tmp/local-volume-test-6fc7ba27-06a8-4fac-a366-27c335460226" Aug 27 14:43:23.029: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6fc7ba27-06a8-4fac-a366-27c335460226"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:23.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:23.154: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6fc7ba27-06a8-4fac-a366-27c335460226] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:23.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-2fcf94db-3188-44f2-90a9-b627fc6a4b0c" Aug 27 14:43:23.277: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2fcf94db-3188-44f2-90a9-b627fc6a4b0c"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:23.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:23.400: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2fcf94db-3188-44f2-90a9-b627fc6a4b0c] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:23.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-448fff87-92dd-4a18-bfe4-cea0c5364a88" Aug 27 14:43:23.519: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-448fff87-92dd-4a18-bfe4-cea0c5364a88"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:23.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:23.646: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-448fff87-92dd-4a18-bfe4-cea0c5364a88] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:23.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-523e32cb-0fd1-421e-9e1f-9434462396a9" Aug 27 14:43:23.772: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-523e32cb-0fd1-421e-9e1f-9434462396a9"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:23.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:23.888: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-523e32cb-0fd1-421e-9e1f-9434462396a9] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:23.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-6f793844-1ef9-46fb-9c9e-3c92011b0c16" Aug 27 14:43:23.993: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6f793844-1ef9-46fb-9c9e-3c92011b0c16"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:23.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:24.076: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6f793844-1ef9-46fb-9c9e-3c92011b0c16] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:24.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-4793dadc-cc6b-41a5-88c7-621b2aa2feee" Aug 27 14:43:24.195: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4793dadc-cc6b-41a5-88c7-621b2aa2feee"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:24.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:24.341: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4793dadc-cc6b-41a5-88c7-621b2aa2feee] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:24.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "capi-leguer-md-0-555f949c67-tw45m" at path "/tmp/local-volume-test-467d7fac-db9b-4bf2-84e4-10a58d21e81b" Aug 27 14:43:24.456: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-467d7fac-db9b-4bf2-84e4-10a58d21e81b"] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:24.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 27 14:43:24.564: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-467d7fac-db9b-4bf2-84e4-10a58d21e81b] Namespace:persistent-local-volumes-test-240 PodName:hostexec-capi-leguer-md-0-555f949c67-tw45m-tv7lr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:24.564: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:24.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-240" for this suite. • [SLOW TEST:88.461 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":21,"completed":2,"skipped":893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:480 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:24.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 27 14:43:24.744: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:24.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5708" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.044 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:480 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:24.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 27 14:43:24.785: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:24.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9751" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:502 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:24.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 27 14:43:24.825: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:24.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5459" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:502 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:24.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 27 14:43:24.868: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:24.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2635" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:24.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 27 14:43:24.904: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:24.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5725" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:24.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Aug 27 14:43:26.970: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7666 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-r5k5l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:26.970: INFO: >>> kubeConfig: /root/.kube/config Aug 27 14:43:27.087: INFO: exec capi-leguer-md-0-555f949c67-5brzb: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Aug 27 14:43:27.087: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stdout: "0\n" Aug 27 14:43:27.087: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Aug 27 14:43:27.087: INFO: exec capi-leguer-md-0-555f949c67-5brzb: exit code: 0 Aug 27 14:43:27.087: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:27.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7666" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.176 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:27.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 27 14:43:27.147: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:27.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2541" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.052 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:484 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:27.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 27 14:43:27.187: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:27.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1413" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:484 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:27.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Aug 27 14:43:29.246: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6862 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-bwb7w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:29.246: INFO: >>> kubeConfig: /root/.kube/config Aug 27 14:43:29.377: INFO: exec capi-leguer-md-0-555f949c67-5brzb: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Aug 27 14:43:29.377: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stdout: "0\n" Aug 27 14:43:29.377: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Aug 27 14:43:29.377: INFO: exec capi-leguer-md-0-555f949c67-5brzb: exit code: 0 Aug 27 14:43:29.377: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:29.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "persistent-local-volumes-test-6862" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.193 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:29.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Aug 27 14:43:31.443: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6851 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-45hll ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:31.443: INFO: >>> kubeConfig: /root/.kube/config Aug 27 14:43:31.568: INFO: exec capi-leguer-md-0-555f949c67-5brzb: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Aug 27 14:43:31.568: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stdout: "0\n" Aug 27 14:43:31.568: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Aug 27 14:43:31.568: INFO: exec capi-leguer-md-0-555f949c67-5brzb: exit code: 0 Aug 27 14:43:31.568: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:31.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6851" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.186 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:31.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Aug 27 14:43:33.644: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1566 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-xd652 ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:33.645: INFO: >>> kubeConfig: /root/.kube/config Aug 27 14:43:33.761: INFO: exec capi-leguer-md-0-555f949c67-5brzb: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Aug 27 14:43:33.761: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stdout: "0\n" Aug 27 14:43:33.761: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Aug 27 14:43:33.761: INFO: exec capi-leguer-md-0-555f949c67-5brzb: exit code: 0 Aug 27 14:43:33.761: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:33.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1566" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.183 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:493 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:33.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 27 14:43:33.823: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:33.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7513" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.058 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:493 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:33.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Aug 27 14:43:35.900: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1818 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-lhdks ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:35.900: INFO: >>> kubeConfig: /root/.kube/config Aug 27 14:43:36.008: INFO: exec capi-leguer-md-0-555f949c67-5brzb: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Aug 27 14:43:36.008: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stdout: "0\n" Aug 27 14:43:36.008: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Aug 27 14:43:36.008: INFO: exec capi-leguer-md-0-555f949c67-5brzb: exit code: 0 Aug 27 14:43:36.008: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:36.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"persistent-local-volumes-test-1818" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.184 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:270 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:36.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Aug 27 14:43:38.073: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5415 PodName:hostexec-capi-leguer-md-0-555f949c67-5brzb-5btc7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Aug 27 14:43:38.073: INFO: >>> kubeConfig: /root/.kube/config Aug 27 14:43:38.184: INFO: exec capi-leguer-md-0-555f949c67-5brzb: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Aug 27 14:43:38.184: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stdout: "0\n" Aug 27 14:43:38.184: INFO: exec capi-leguer-md-0-555f949c67-5brzb: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Aug 27 14:43:38.184: INFO: exec capi-leguer-md-0-555f949c67-5brzb: exit code: 0 Aug 27 14:43:38.184: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:38.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "persistent-local-volumes-test-5415" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.176 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:270 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:43:38.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 Aug 27 14:43:38.250: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:43:38.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-6488" for this suite. 
S [SKIPPING] [0.051 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug 27 14:43:38.266: INFO: Running AfterSuite actions on all nodes Aug 27 14:43:38.266: INFO: Running AfterSuite actions on node 1 Aug 27 14:43:38.266: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml {"msg":"Test Suite completed","total":21,"completed":2,"skipped":5666,"failed":0} Ran 2 of 5668 Specs in 179.929 seconds SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 5666 Skipped PASS Ginkgo ran 1 suite in 3m1.576500568s Test Suite Passed
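------------------------------
Note on the skipped specs above: every skip traces to one of two BeforeEach preconditions. The Volume metrics and Pod Disks specs require a cloud provider (gce, gke, or aws) while this run used the skeleton provider, and the gce-localssd-scsi-fs specs probe the node for at least one SCSI-fs local SSD under /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ and skip when the directory is absent (the repeated "0\n" / "No such file or directory" exchange). The sketch below is not part of the captured run; it is an illustrative outline of those two checks under stated assumptions (the device name /dev/sdb, the ssd0 mount name, and the exact extra GCE flags are assumptions, not taken from this log).

# Probe the specs rely on: count entries under the expected localSSD mount root.
# Exit code 0 with "0" on stdout (directory missing or empty) leads to the skip seen above.
ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l

# Hypothetical node preparation so the probe would find one SCSI-fs localSSD
# (on GCE this is normally done by node startup scripts; names here are assumed):
sudo mkdir -p /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ssd0
sudo mkfs.ext4 /dev/sdb
sudo mount /dev/sdb /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ssd0

# Provider-gated specs skipped because the suite ran without a cloud provider;
# re-running them against GCE would pass a provider (plus the usual project/zone
# flags) to e2e.test, roughly:
./e2e.test --provider=gce --gce-project=<project> --gce-zone=<zone> \
  --ginkgo.focus='\[sig-storage\].*Volume metrics'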