I0525 12:04:08.585802 17 e2e.go:129] Starting e2e run "905e99c1-3262-4458-a411-e81a77fbbb22" on Ginkgo node 1
{"msg":"Test Suite starting","total":18,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621944247 - Will randomize all specs
Will run 18 of 5771 specs
May 25 12:04:08.675: INFO: >>> kubeConfig: /root/.kube/config
May 25 12:04:08.679: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 25 12:04:08.707: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 25 12:04:08.758: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 25 12:04:08.758: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 25 12:04:08.758: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 25 12:04:08.771: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 25 12:04:08.771: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 25 12:04:08.771: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 25 12:04:08.771: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 25 12:04:08.771: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 25 12:04:08.771: INFO: e2e test version: v1.21.1
May 25 12:04:08.772: INFO: kube-apiserver version: v1.21.1
May 25 12:04:08.772: INFO: >>> kubeConfig: /root/.kube/config
May 25 12:04:08.782: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 12:04:08.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
W0525 12:04:08.816499 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 12:04:08.816: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 12:04:08.825: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 25 12:04:08.828: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 12:04:08.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-9461" for this suite.
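The namespace setup above probes for PodSecurityPolicy: the framework lists PSPs and then creates a pod with a server-side dry run; if the returned object carries no kubernetes.io/psp annotation, it assumes PSP admission is disabled. A rough manual equivalent is sketched below (hypothetical pod name and image; this is not the framework's own code path):

# Manual approximation of the PSP probe logged above. The kubernetes.io/psp
# annotation is what the PSP admission plugin adds to an admitted pod.
kubectl get podsecuritypolicies
cat <<'EOF' | kubectl create --dry-run=server -o yaml -f - | grep 'kubernetes.io/psp' \
  || echo "No PSP annotation on the dry-run pod; assuming PodSecurityPolicy is disabled"
apiVersion: v1
kind: Pod
metadata:
  name: psp-probe
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1
EOF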
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.056 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:08.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 25 12:04:10.887: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4909 PodName:hostexec-v1.21-worker-c8kmr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:04:10.887: INFO: >>> kubeConfig: /root/.kube/config May 25 12:04:11.051: INFO: exec v1.21-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 25 12:04:11.051: INFO: exec v1.21-worker: stdout: "0\n" May 25 12:04:11.051: INFO: exec v1.21-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" May 25 12:04:11.051: INFO: exec v1.21-worker: exit code: 0 May 25 12:04:11.051: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:04:11.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4909" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.222 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:11.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 25 12:04:13.179: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7368 PodName:hostexec-v1.21-worker2-bl756 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:04:13.179: INFO: >>> kubeConfig: /root/.kube/config May 25 12:04:13.331: INFO: exec v1.21-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 25 12:04:13.331: INFO: exec v1.21-worker2: stdout: "0\n" May 25 12:04:13.331: INFO: exec v1.21-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" May 25 12:04:13.331: INFO: exec v1.21-worker2: exit code: 0 May 25 12:04:13.331: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:04:13.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7368" for this suite. 
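Both gce-localssd-scsi-fs skips above come from the same BeforeEach: a hostexec pod nsenters into the node's mount namespace and counts the entries under /mnt/disks/by-uuid/google-local-ssds-scsi-fs/. On these nodes the directory does not exist, the count is 0, and the spec skips with "Requires at least 1 scsi fs localSSD". The same check, expressed as shell commands, looks roughly like this:

# On-node equivalent of the count the BeforeEach performs; the spec only
# proceeds when this prints 1 or more.
ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
# The form the framework actually runs, from a privileged hostexec pod:
nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 'ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l'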
S [SKIPPING] in Spec Setup (BeforeEach) [2.280 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:13.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 25 12:04:13.414: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:04:13.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6453" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.064 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:13.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 25 12:04:13.457: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:04:13.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5040" for this suite. 
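Every Volume metrics spec in this run skips the same way: the suite was started with the generic provider "skeleton", and the BeforeEach at volume_metrics.go:56 only allows gce, gke and aws. An invocation that produces a run like this one would look roughly as follows (the flag spellings are assumptions, not taken from this log; check the e2e.test help output for the build in use):

# Hypothetical invocation of the e2e binary behind this log.
./e2e.test \
  --provider=skeleton \
  --kubeconfig=/root/.kube/config \
  '--ginkgo.focus=\[Serial\].*(Volume metrics|PersistentVolumes-local|Pod Disks)'
# Re-running with --provider=gce (or gke/aws) against a matching cluster is
# what lets the provider-gated specs execute instead of skipping.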
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:13.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 25 12:04:13.892: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:04:13.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2321" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.437 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:13.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 25 12:04:14.286: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:04:14.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8083" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.577 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:14.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 May 25 12:04:14.993: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:04:14.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-9254" for this suite. 
S [SKIPPING] [0.508 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:459 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:15.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 25 12:04:15.390: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:04:15.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2261" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.396 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:15.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 25 12:04:16.081: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:04:16.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6999" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.871 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:04:16.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648 STEP: Clean PV local-pvzmkkp [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:05:45.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-447" for this suite. 
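The spec that just completed above creates one local PV and PVC, starts 50 pods that all mount that PVC, and waits for every pod to reach Running. A manual spot check of the same state would look roughly like this (the namespace name is the one from this run; the objects disappear once the namespace is destroyed):

# Hypothetical manual mirror of the "Wait for all pods are running" step.
kubectl get pvc -n persistent-local-volumes-test-447
kubectl get pods -n persistent-local-volumes-test-447 --no-headers \
  | awk '$3 != "Running" {print; bad=1} END {exit bad}' \
  && echo "all pods sharing the local PV are Running"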
• [SLOW TEST:89.705 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Pods sharing a single local PV [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629
all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":18,"completed":1,"skipped":3511,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 12:05:45.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
May 25 12:05:52.043: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5658 PodName:hostexec-v1.21-worker2-p6tz2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 25 12:05:52.043: INFO: >>> kubeConfig: /root/.kube/config
May 25 12:05:52.199: INFO: exec v1.21-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 25 12:05:52.199: INFO: exec v1.21-worker2: stdout: "0\n"
May 25 12:05:52.199: INFO: exec v1.21-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 25 12:05:52.199: INFO: exec v1.21-worker2: exit code: 0
May 25 12:05:52.199: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 12:05:52.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-5658" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [6.224 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:05:52.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 25 12:05:52.396: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:05:52.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3073" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.193 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:05:52.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 25 12:05:58.480: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7693 PodName:hostexec-v1.21-worker-zlr6x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:05:58.480: INFO: >>> kubeConfig: /root/.kube/config May 25 12:05:58.645: INFO: exec v1.21-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 25 12:05:58.645: INFO: exec v1.21-worker: stdout: "0\n" May 25 12:05:58.645: INFO: exec v1.21-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" May 25 12:05:58.645: INFO: exec v1.21-worker: exit code: 0 May 25 12:05:58.645: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:05:58.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7693" 
for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [6.279 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:05:58.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 25 12:05:59.184: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:05:59.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3795" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.695 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:05:59.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 25 12:06:05.980: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2296 PodName:hostexec-v1.21-worker-rcdvc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:05.980: INFO: >>> kubeConfig: /root/.kube/config May 25 12:06:06.423: INFO: exec v1.21-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 25 12:06:06.423: INFO: exec v1.21-worker: stdout: "0\n" May 25 12:06:06.423: INFO: exec v1.21-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" May 25 12:06:06.423: INFO: exec v1.21-worker: exit code: 0 May 25 12:06:06.423: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] 
[sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:06:06.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2296" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [7.194 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:06:06.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 25 12:06:06.796: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:06:06.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6731" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.217 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 12:06:06.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "v1.21-worker" STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-ba777b98-3c47-4649-822f-07ca242b6e0d" May 25 12:06:08.922: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ba777b98-3c47-4649-822f-07ca242b6e0d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ba777b98-3c47-4649-822f-07ca242b6e0d" "/tmp/local-volume-test-ba777b98-3c47-4649-822f-07ca242b6e0d"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:08.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-a6f5ba18-c850-4589-a70c-af7bc3a34fdb" May 25 12:06:09.098: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a6f5ba18-c850-4589-a70c-af7bc3a34fdb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a6f5ba18-c850-4589-a70c-af7bc3a34fdb" "/tmp/local-volume-test-a6f5ba18-c850-4589-a70c-af7bc3a34fdb"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:09.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-3bce380d-47e1-4535-bb8b-4fbfb47c757e" May 25 12:06:09.240: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3bce380d-47e1-4535-bb8b-4fbfb47c757e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3bce380d-47e1-4535-bb8b-4fbfb47c757e" "/tmp/local-volume-test-3bce380d-47e1-4535-bb8b-4fbfb47c757e"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:09.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-0e8b414e-b971-4667-8740-9150a78ed44d" May 25 12:06:09.373: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0e8b414e-b971-4667-8740-9150a78ed44d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0e8b414e-b971-4667-8740-9150a78ed44d" "/tmp/local-volume-test-0e8b414e-b971-4667-8740-9150a78ed44d"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:09.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-5ac2104a-28be-44db-a3fd-4617256684d4" May 25 12:06:09.520: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5ac2104a-28be-44db-a3fd-4617256684d4" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5ac2104a-28be-44db-a3fd-4617256684d4" "/tmp/local-volume-test-5ac2104a-28be-44db-a3fd-4617256684d4"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:09.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-115089cf-9090-48a8-ac7a-6661cd0e6c38" May 25 12:06:09.672: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-115089cf-9090-48a8-ac7a-6661cd0e6c38" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-115089cf-9090-48a8-ac7a-6661cd0e6c38" "/tmp/local-volume-test-115089cf-9090-48a8-ac7a-6661cd0e6c38"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:09.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-54c91b64-9f15-4c5b-a325-7a314bfa76db" May 25 
12:06:09.806: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-54c91b64-9f15-4c5b-a325-7a314bfa76db" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-54c91b64-9f15-4c5b-a325-7a314bfa76db" "/tmp/local-volume-test-54c91b64-9f15-4c5b-a325-7a314bfa76db"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:09.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-3d76a265-4f56-4b11-a4b0-8a2501594124" May 25 12:06:09.953: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3d76a265-4f56-4b11-a4b0-8a2501594124" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3d76a265-4f56-4b11-a4b0-8a2501594124" "/tmp/local-volume-test-3d76a265-4f56-4b11-a4b0-8a2501594124"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:09.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-2ad427c7-44d9-4e22-9585-f3ac16a4f6d3" May 25 12:06:10.093: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2ad427c7-44d9-4e22-9585-f3ac16a4f6d3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2ad427c7-44d9-4e22-9585-f3ac16a4f6d3" "/tmp/local-volume-test-2ad427c7-44d9-4e22-9585-f3ac16a4f6d3"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:10.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-bf380325-d7e9-4f5c-b50a-88002faf02cd" May 25 12:06:10.232: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-bf380325-d7e9-4f5c-b50a-88002faf02cd" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-bf380325-d7e9-4f5c-b50a-88002faf02cd" "/tmp/local-volume-test-bf380325-d7e9-4f5c-b50a-88002faf02cd"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:10.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "v1.21-worker2" STEP: Creating tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-42a78681-28e8-49a4-b2ca-a87b98a550d5" May 25 12:06:32.382: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-42a78681-28e8-49a4-b2ca-a87b98a550d5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-42a78681-28e8-49a4-b2ca-a87b98a550d5" "/tmp/local-volume-test-42a78681-28e8-49a4-b2ca-a87b98a550d5"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:32.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node 
"v1.21-worker2" at path "/tmp/local-volume-test-41cf342f-4548-4eaa-858e-0fdae015d88f" May 25 12:06:32.537: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-41cf342f-4548-4eaa-858e-0fdae015d88f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-41cf342f-4548-4eaa-858e-0fdae015d88f" "/tmp/local-volume-test-41cf342f-4548-4eaa-858e-0fdae015d88f"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:32.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-736f7210-1dde-49b6-af7d-d7bde721c546" May 25 12:06:32.681: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-736f7210-1dde-49b6-af7d-d7bde721c546" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-736f7210-1dde-49b6-af7d-d7bde721c546" "/tmp/local-volume-test-736f7210-1dde-49b6-af7d-d7bde721c546"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:32.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-48d7d251-28a8-4094-906c-52815d7ee469" May 25 12:06:32.831: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-48d7d251-28a8-4094-906c-52815d7ee469" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-48d7d251-28a8-4094-906c-52815d7ee469" "/tmp/local-volume-test-48d7d251-28a8-4094-906c-52815d7ee469"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:32.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-108d762a-9f8f-47a7-85e8-9a7d0c3ea22b" May 25 12:06:32.966: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-108d762a-9f8f-47a7-85e8-9a7d0c3ea22b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-108d762a-9f8f-47a7-85e8-9a7d0c3ea22b" "/tmp/local-volume-test-108d762a-9f8f-47a7-85e8-9a7d0c3ea22b"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:32.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-c433520f-8434-4eed-9df5-1706f3152eeb" May 25 12:06:33.112: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c433520f-8434-4eed-9df5-1706f3152eeb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c433520f-8434-4eed-9df5-1706f3152eeb" "/tmp/local-volume-test-c433520f-8434-4eed-9df5-1706f3152eeb"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:33.112: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Creating tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-51b74732-2615-47e5-a7a0-3009afc514fa" May 25 12:06:33.244: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-51b74732-2615-47e5-a7a0-3009afc514fa" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-51b74732-2615-47e5-a7a0-3009afc514fa" "/tmp/local-volume-test-51b74732-2615-47e5-a7a0-3009afc514fa"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:33.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-f59a9074-cc6b-425f-9ac4-01a8bbd1743e" May 25 12:06:33.368: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f59a9074-cc6b-425f-9ac4-01a8bbd1743e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f59a9074-cc6b-425f-9ac4-01a8bbd1743e" "/tmp/local-volume-test-f59a9074-cc6b-425f-9ac4-01a8bbd1743e"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:33.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-b64a52c1-ef51-4d22-a134-bb3e2bd753ed" May 25 12:06:33.522: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b64a52c1-ef51-4d22-a134-bb3e2bd753ed" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b64a52c1-ef51-4d22-a134-bb3e2bd753ed" "/tmp/local-volume-test-b64a52c1-ef51-4d22-a134-bb3e2bd753ed"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:33.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-4e3372c4-1821-4292-9a62-34ee41331d27" May 25 12:06:33.672: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4e3372c4-1821-4292-9a62-34ee41331d27" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4e3372c4-1821-4292-9a62-34ee41331d27" "/tmp/local-volume-test-4e3372c4-1821-4292-9a62-34ee41331d27"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:06:33.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully May 25 12:06:38.997: INFO: Deleting pod pod-fcd3d984-59ab-4d48-825a-b81c163add44 May 25 12:06:39.006: INFO: Deleting PersistentVolumeClaim "pvc-bzhzp" May 25 12:06:39.011: INFO: Deleting PersistentVolumeClaim "pvc-9phcs" May 25 12:06:39.016: INFO: Deleting PersistentVolumeClaim "pvc-t95ss" May 25 12:06:39.020: INFO: 1/28 pods finished STEP: 
Delete "local-pvvw2gx" and create a new PV for same local volume storage STEP: Delete "local-pvvw2gx" and create a new PV for same local volume storage STEP: Delete "local-pv7kmhf" and create a new PV for same local volume storage STEP: Delete "local-pv7kmhf" and create a new PV for same local volume storage STEP: Delete "local-pvdxrrd" and create a new PV for same local volume storage May 25 12:06:40.003: INFO: Deleting pod pod-0f4ec2d2-8922-4ee8-8236-654dc50249d2 May 25 12:06:40.010: INFO: Deleting PersistentVolumeClaim "pvc-mc4m5" May 25 12:06:40.015: INFO: Deleting PersistentVolumeClaim "pvc-5b87s" May 25 12:06:40.020: INFO: Deleting PersistentVolumeClaim "pvc-d2pv9" May 25 12:06:40.025: INFO: 2/28 pods finished STEP: Delete "local-pvzqrjd" and create a new PV for same local volume storage STEP: Delete "local-pvttd8t" and create a new PV for same local volume storage STEP: Delete "local-pvjmwdq" and create a new PV for same local volume storage May 25 12:06:40.997: INFO: Deleting pod pod-745c46f4-889a-4ec6-8fc0-2a5fd0afde54 May 25 12:06:41.006: INFO: Deleting PersistentVolumeClaim "pvc-m2trl" May 25 12:06:41.010: INFO: Deleting PersistentVolumeClaim "pvc-kqc4n" May 25 12:06:41.014: INFO: Deleting PersistentVolumeClaim "pvc-9hbkp" May 25 12:06:41.019: INFO: 3/28 pods finished STEP: Delete "local-pv8kgjp" and create a new PV for same local volume storage STEP: Delete "local-pv8kgjp" and create a new PV for same local volume storage STEP: Delete "local-pv5k4md" and create a new PV for same local volume storage STEP: Delete "local-pvrqnxb" and create a new PV for same local volume storage May 25 12:06:41.995: INFO: Deleting pod pod-1962698e-67a6-4306-8124-1457e24e0a67 May 25 12:06:42.002: INFO: Deleting PersistentVolumeClaim "pvc-xt7wd" May 25 12:06:42.007: INFO: Deleting PersistentVolumeClaim "pvc-xfkvz" May 25 12:06:42.011: INFO: Deleting PersistentVolumeClaim "pvc-5xmbx" May 25 12:06:42.015: INFO: 4/28 pods finished STEP: Delete "local-pv92vzw" and create a new PV for same local volume storage STEP: Delete "local-pvscqbq" and create a new PV for same local volume storage STEP: Delete "local-pv88l9m" and create a new PV for same local volume storage May 25 12:06:42.996: INFO: Deleting pod pod-15e19298-32a8-4494-8746-f5c63a7419bc May 25 12:06:43.003: INFO: Deleting PersistentVolumeClaim "pvc-899hd" May 25 12:06:43.008: INFO: Deleting PersistentVolumeClaim "pvc-vlgvt" May 25 12:06:43.013: INFO: Deleting PersistentVolumeClaim "pvc-dnlcw" May 25 12:06:43.018: INFO: 5/28 pods finished STEP: Delete "local-pvfglbf" and create a new PV for same local volume storage STEP: Delete "local-pvfglbf" and create a new PV for same local volume storage STEP: Delete "local-pvz86ql" and create a new PV for same local volume storage STEP: Delete "local-pvz86ql" and create a new PV for same local volume storage STEP: Delete "local-pv7cmrb" and create a new PV for same local volume storage STEP: Delete "local-pv7cmrb" and create a new PV for same local volume storage May 25 12:06:44.997: INFO: Deleting pod pod-63773209-9fe3-4f85-b350-9c024215fdc3 May 25 12:06:45.005: INFO: Deleting PersistentVolumeClaim "pvc-c4vfd" May 25 12:06:45.010: INFO: Deleting PersistentVolumeClaim "pvc-t4zfh" May 25 12:06:45.015: INFO: Deleting PersistentVolumeClaim "pvc-7kw2q" May 25 12:06:45.025: INFO: 6/28 pods finished STEP: Delete "local-pvnvlg5" and create a new PV for same local volume storage STEP: Delete "local-pvnvlg5" and create a new PV for same local volume storage STEP: Delete "local-pvbxmsx" and create a new PV for same 
local volume storage STEP: Delete "local-pvhsv5d" and create a new PV for same local volume storage May 25 12:06:49.079: INFO: Deleting pod pod-0c52fe47-e939-4086-aeb2-cc91955eaa7d May 25 12:06:49.088: INFO: Deleting PersistentVolumeClaim "pvc-dmjs9" May 25 12:06:49.093: INFO: Deleting PersistentVolumeClaim "pvc-2xlpg" May 25 12:06:49.098: INFO: Deleting PersistentVolumeClaim "pvc-9x7xd" May 25 12:06:49.102: INFO: 7/28 pods finished STEP: Delete "local-pvpqzdx" and create a new PV for same local volume storage STEP: Delete "local-pvmtk42" and create a new PV for same local volume storage STEP: Delete "local-pvrk7td" and create a new PV for same local volume storage STEP: Delete "local-pvrk7td" and create a new PV for same local volume storage May 25 12:06:50.996: INFO: Deleting pod pod-b4e11edf-fd1f-48f3-8810-40d071c9ed4f May 25 12:06:51.005: INFO: Deleting PersistentVolumeClaim "pvc-7729t" May 25 12:06:51.010: INFO: Deleting PersistentVolumeClaim "pvc-nhmd7" May 25 12:06:51.015: INFO: Deleting PersistentVolumeClaim "pvc-85657" May 25 12:06:51.019: INFO: 8/28 pods finished STEP: Delete "local-pvtzx9z" and create a new PV for same local volume storage STEP: Delete "local-pvtzx9z" and create a new PV for same local volume storage STEP: Delete "local-pvss4kn" and create a new PV for same local volume storage STEP: Delete "local-pv9xgg8" and create a new PV for same local volume storage May 25 12:06:52.997: INFO: Deleting pod pod-6d0cb92d-b113-45f9-a2bb-e91299ff3c08 May 25 12:06:53.011: INFO: Deleting PersistentVolumeClaim "pvc-9fs4f" May 25 12:06:53.021: INFO: Deleting PersistentVolumeClaim "pvc-wwnj4" May 25 12:06:53.028: INFO: Deleting PersistentVolumeClaim "pvc-h4rbz" May 25 12:06:53.036: INFO: 9/28 pods finished May 25 12:06:53.036: INFO: Deleting pod pod-a1e16be2-a2c5-431f-80a6-086a8decc787 May 25 12:06:53.041: INFO: Deleting PersistentVolumeClaim "pvc-d6mw4" May 25 12:06:53.044: INFO: Deleting PersistentVolumeClaim "pvc-h97gp" May 25 12:06:53.047: INFO: Deleting PersistentVolumeClaim "pvc-87xnq" STEP: Delete "local-pv6lpdh" and create a new PV for same local volume storage May 25 12:06:53.051: INFO: 10/28 pods finished STEP: Delete "local-pvkw725" and create a new PV for same local volume storage STEP: Delete "local-pvn2mvh" and create a new PV for same local volume storage STEP: Delete "local-pv5b7xt" and create a new PV for same local volume storage STEP: Delete "local-pvsjqbx" and create a new PV for same local volume storage STEP: Delete "local-pvh84hn" and create a new PV for same local volume storage May 25 12:06:53.996: INFO: Deleting pod pod-faf68b2d-9c39-478c-b31c-1163dace4776 May 25 12:06:54.005: INFO: Deleting PersistentVolumeClaim "pvc-xmm45" May 25 12:06:54.009: INFO: Deleting PersistentVolumeClaim "pvc-fj76g" May 25 12:06:54.014: INFO: Deleting PersistentVolumeClaim "pvc-nstvf" May 25 12:06:54.019: INFO: 11/28 pods finished STEP: Delete "local-pvjp4jk" and create a new PV for same local volume storage STEP: Delete "local-pvx2dpc" and create a new PV for same local volume storage STEP: Delete "local-pv2vkv2" and create a new PV for same local volume storage May 25 12:06:55.997: INFO: Deleting pod pod-36e60585-83c1-404a-ba8b-e11362522501 May 25 12:06:56.006: INFO: Deleting PersistentVolumeClaim "pvc-nqc6w" May 25 12:06:56.010: INFO: Deleting PersistentVolumeClaim "pvc-xjrrw" May 25 12:06:56.015: INFO: Deleting PersistentVolumeClaim "pvc-g6vpz" May 25 12:06:56.020: INFO: 12/28 pods finished STEP: Delete "local-pv77ngv" and create a new PV for same local volume storage STEP: 
Delete "local-pv77ngv" and create a new PV for same local volume storage STEP: Delete "local-pv494rp" and create a new PV for same local volume storage STEP: Delete "local-pvsrk27" and create a new PV for same local volume storage May 25 12:07:00.081: INFO: Deleting pod pod-9437194b-ea4f-46d0-9ce4-5dfda1b0d168 May 25 12:07:00.481: INFO: Deleting PersistentVolumeClaim "pvc-xpkkc" May 25 12:07:00.878: INFO: Deleting PersistentVolumeClaim "pvc-l9fz8" May 25 12:07:00.887: INFO: Deleting PersistentVolumeClaim "pvc-nf2lz" May 25 12:07:00.984: INFO: 13/28 pods finished STEP: Delete "local-pvnzvct" and create a new PV for same local volume storage STEP: Delete "local-pv28j5h" and create a new PV for same local volume storage STEP: Delete "local-pvsf2jd" and create a new PV for same local volume storage May 25 12:07:03.181: INFO: Deleting pod pod-6c603a2e-6450-4224-90e3-083c74208550 May 25 12:07:03.202: INFO: Deleting PersistentVolumeClaim "pvc-qg5bw" May 25 12:07:03.206: INFO: Deleting PersistentVolumeClaim "pvc-sd4q8" May 25 12:07:03.211: INFO: Deleting PersistentVolumeClaim "pvc-89vv8" May 25 12:07:03.216: INFO: 14/28 pods finished STEP: Delete "local-pvgpvjb" and create a new PV for same local volume storage STEP: Delete "local-pvr4hhv" and create a new PV for same local volume storage STEP: Delete "local-pvdspcf" and create a new PV for same local volume storage May 25 12:07:03.998: INFO: Deleting pod pod-b628a5f3-7cc1-4d68-b8ca-3cf0719a84a8 May 25 12:07:04.014: INFO: Deleting PersistentVolumeClaim "pvc-bh5xm" May 25 12:07:04.018: INFO: Deleting PersistentVolumeClaim "pvc-cn2rs" May 25 12:07:04.022: INFO: Deleting PersistentVolumeClaim "pvc-sljsz" May 25 12:07:04.026: INFO: 15/28 pods finished STEP: Delete "local-pvhkhlz" and create a new PV for same local volume storage STEP: Delete "local-pvrkzlp" and create a new PV for same local volume storage STEP: Delete "local-pvr2jpx" and create a new PV for same local volume storage May 25 12:07:05.997: INFO: Deleting pod pod-4e1a60df-264d-4500-a6e1-7c5ff02abd70 May 25 12:07:06.007: INFO: Deleting PersistentVolumeClaim "pvc-t77x9" May 25 12:07:06.011: INFO: Deleting PersistentVolumeClaim "pvc-wxq8m" May 25 12:07:06.016: INFO: Deleting PersistentVolumeClaim "pvc-5dp2f" May 25 12:07:06.020: INFO: 16/28 pods finished May 25 12:07:06.020: INFO: Deleting pod pod-753e5780-2e1c-4d3d-b700-319659fa4064 May 25 12:07:06.027: INFO: Deleting PersistentVolumeClaim "pvc-w9fjm" STEP: Delete "local-pvvz9xq" and create a new PV for same local volume storage May 25 12:07:06.031: INFO: Deleting PersistentVolumeClaim "pvc-m6txv" May 25 12:07:06.035: INFO: Deleting PersistentVolumeClaim "pvc-vvsh2" May 25 12:07:06.039: INFO: 17/28 pods finished STEP: Delete "local-pvvz9xq" and create a new PV for same local volume storage STEP: Delete "local-pvf46kr" and create a new PV for same local volume storage STEP: Delete "local-pv4m84g" and create a new PV for same local volume storage STEP: Delete "local-pvjtvpc" and create a new PV for same local volume storage STEP: Delete "local-pv6ln88" and create a new PV for same local volume storage STEP: Delete "local-pvxkm96" and create a new PV for same local volume storage May 25 12:07:06.997: INFO: Deleting pod pod-59319775-3a8d-48c7-971b-ee83413c37a9 May 25 12:07:07.004: INFO: Deleting PersistentVolumeClaim "pvc-ddfdz" May 25 12:07:07.009: INFO: Deleting PersistentVolumeClaim "pvc-5mqbd" May 25 12:07:07.014: INFO: Deleting PersistentVolumeClaim "pvc-66gfz" May 25 12:07:07.019: INFO: 18/28 pods finished STEP: Delete "local-pv7ql72" 
and create a new PV for same local volume storage STEP: Delete "local-pv7ql72" and create a new PV for same local volume storage STEP: Delete "local-pv9jdnb" and create a new PV for same local volume storage STEP: Delete "local-pvvbh4z" and create a new PV for same local volume storage May 25 12:07:10.001: INFO: Deleting pod pod-46a2c74b-5cde-49d5-aed2-768446d27711 May 25 12:07:10.007: INFO: Deleting PersistentVolumeClaim "pvc-vb96z" May 25 12:07:10.010: INFO: Deleting PersistentVolumeClaim "pvc-zxrkk" May 25 12:07:10.014: INFO: Deleting PersistentVolumeClaim "pvc-dl9b7" May 25 12:07:10.018: INFO: 19/28 pods finished STEP: Delete "local-pvmql9l" and create a new PV for same local volume storage STEP: Delete "local-pvmql9l" and create a new PV for same local volume storage STEP: Delete "local-pvvbqrf" and create a new PV for same local volume storage STEP: Delete "local-pvvbqrf" and create a new PV for same local volume storage STEP: Delete "local-pvxfprh" and create a new PV for same local volume storage May 25 12:07:14.997: INFO: Deleting pod pod-09a94eff-0d94-4d2d-bb59-505ccf281708 May 25 12:07:15.007: INFO: Deleting PersistentVolumeClaim "pvc-5wcsk" May 25 12:07:15.012: INFO: Deleting PersistentVolumeClaim "pvc-9zs6w" May 25 12:07:15.017: INFO: Deleting PersistentVolumeClaim "pvc-d9pjj" May 25 12:07:15.022: INFO: 20/28 pods finished STEP: Delete "local-pv8rn2b" and create a new PV for same local volume storage STEP: Delete "local-pv8rn2b" and create a new PV for same local volume storage STEP: Delete "local-pvc9n24" and create a new PV for same local volume storage STEP: Delete "local-pvmwh6r" and create a new PV for same local volume storage May 25 12:07:16.997: INFO: Deleting pod pod-37a7fd1b-25fe-44e3-a6ed-cc2bb55bea7e May 25 12:07:17.006: INFO: Deleting PersistentVolumeClaim "pvc-g52d6" May 25 12:07:17.010: INFO: Deleting PersistentVolumeClaim "pvc-lmx2c" May 25 12:07:17.016: INFO: Deleting PersistentVolumeClaim "pvc-qkmzd" May 25 12:07:17.021: INFO: 21/28 pods finished May 25 12:07:17.021: INFO: Deleting pod pod-efbf6121-fbe2-4886-a8c4-70c566fde233 May 25 12:07:17.027: INFO: Deleting PersistentVolumeClaim "pvc-5wtt6" May 25 12:07:17.031: INFO: Deleting PersistentVolumeClaim "pvc-bbbc2" STEP: Delete "local-pvhdvpw" and create a new PV for same local volume storage May 25 12:07:17.039: INFO: Deleting PersistentVolumeClaim "pvc-fkbpt" May 25 12:07:17.043: INFO: 22/28 pods finished STEP: Delete "local-pvhdvpw" and create a new PV for same local volume storage STEP: Delete "local-pv8c2qc" and create a new PV for same local volume storage STEP: Delete "local-pvt7b7l" and create a new PV for same local volume storage STEP: Delete "local-pv2b56s" and create a new PV for same local volume storage STEP: Delete "local-pv4qtdr" and create a new PV for same local volume storage STEP: Delete "local-pv4xflr" and create a new PV for same local volume storage May 25 12:07:17.997: INFO: Deleting pod pod-1ab60108-f41a-47fd-a151-575e78568d16 May 25 12:07:18.379: INFO: Deleting PersistentVolumeClaim "pvc-tjjd8" May 25 12:07:18.386: INFO: Deleting PersistentVolumeClaim "pvc-qvh2x" May 25 12:07:18.392: INFO: Deleting PersistentVolumeClaim "pvc-wmjcw" May 25 12:07:18.397: INFO: 23/28 pods finished STEP: Delete "local-pvwj7rj" and create a new PV for same local volume storage STEP: Delete "local-pvps4z9" and create a new PV for same local volume storage STEP: Delete "local-pvtlgkp" and create a new PV for same local volume storage May 25 12:07:18.997: INFO: Deleting pod 
pod-739c059e-0726-4942-bd6d-63e92f06b91f May 25 12:07:19.183: INFO: Deleting PersistentVolumeClaim "pvc-l8wpt" May 25 12:07:19.188: INFO: Deleting PersistentVolumeClaim "pvc-vq24b" May 25 12:07:19.193: INFO: Deleting PersistentVolumeClaim "pvc-zvptj" May 25 12:07:19.197: INFO: 24/28 pods finished STEP: Delete "local-pvk8kw5" and create a new PV for same local volume storage STEP: Delete "local-pv5z7ds" and create a new PV for same local volume storage STEP: Delete "local-pvzf4rs" and create a new PV for same local volume storage May 25 12:07:21.996: INFO: Deleting pod pod-20b24da8-d2a8-420b-9e77-9c956d182dc6 May 25 12:07:22.004: INFO: Deleting PersistentVolumeClaim "pvc-8dkkf" May 25 12:07:22.008: INFO: Deleting PersistentVolumeClaim "pvc-zfgds" May 25 12:07:22.015: INFO: Deleting PersistentVolumeClaim "pvc-9vfbc" May 25 12:07:22.020: INFO: 25/28 pods finished STEP: Delete "local-pvzzrfx" and create a new PV for same local volume storage STEP: Delete "local-pvqnsj8" and create a new PV for same local volume storage STEP: Delete "local-pv98kxp" and create a new PV for same local volume storage May 25 12:07:23.997: INFO: Deleting pod pod-71c3afb9-00d3-4fe3-99b8-58fe0362686f May 25 12:07:24.007: INFO: Deleting PersistentVolumeClaim "pvc-8kbbb" May 25 12:07:24.011: INFO: Deleting PersistentVolumeClaim "pvc-ccfb9" May 25 12:07:24.015: INFO: Deleting PersistentVolumeClaim "pvc-tg4m2" May 25 12:07:24.020: INFO: 26/28 pods finished STEP: Delete "local-pvrkfc2" and create a new PV for same local volume storage STEP: Delete "local-pvm5f56" and create a new PV for same local volume storage STEP: Delete "local-pvsg944" and create a new PV for same local volume storage May 25 12:07:25.996: INFO: Deleting pod pod-073e17f3-a535-4c40-be82-138535fb02e9 May 25 12:07:26.004: INFO: Deleting PersistentVolumeClaim "pvc-mpcmc" May 25 12:07:26.009: INFO: Deleting PersistentVolumeClaim "pvc-2vcvg" May 25 12:07:26.015: INFO: Deleting PersistentVolumeClaim "pvc-vq7hw" May 25 12:07:26.019: INFO: 27/28 pods finished STEP: Delete "local-pvg8ftg" and create a new PV for same local volume storage STEP: Delete "local-pv5qkcc" and create a new PV for same local volume storage STEP: Delete "local-pvgxls7" and create a new PV for same local volume storage May 25 12:07:27.995: INFO: Deleting pod pod-9751367c-f0c1-4785-b071-01080d86ba7c May 25 12:07:28.005: INFO: Deleting PersistentVolumeClaim "pvc-bs24c" May 25 12:07:28.009: INFO: Deleting PersistentVolumeClaim "pvc-tf6tc" May 25 12:07:28.014: INFO: Deleting PersistentVolumeClaim "pvc-2nz7x" May 25 12:07:28.018: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "v1.21-worker" STEP: Cleaning up PVC and PV May 25 12:07:28.019: INFO: pvc is nil May 25 12:07:28.019: INFO: Deleting PersistentVolume "local-pvlk669" STEP: Cleaning up PVC and PV May 25 12:07:28.023: INFO: pvc is nil May 25 12:07:28.023: INFO: Deleting PersistentVolume "local-pvkdscz" STEP: Cleaning up PVC and PV May 25 12:07:28.027: INFO: pvc is nil May 25 12:07:28.027: INFO: Deleting PersistentVolume "local-pvc69h5" STEP: Cleaning up PVC and PV May 25 12:07:28.031: INFO: pvc is nil May 25 12:07:28.031: INFO: Deleting PersistentVolume "local-pvn7kbb" STEP: Cleaning up PVC and PV May 25 12:07:28.035: INFO: pvc is nil May 25 12:07:28.035: INFO: Deleting 
PersistentVolume "local-pvjd9fm" STEP: Cleaning up PVC and PV May 25 12:07:28.039: INFO: pvc is nil May 25 12:07:28.039: INFO: Deleting PersistentVolume "local-pvdf5ww" STEP: Cleaning up PVC and PV May 25 12:07:28.043: INFO: pvc is nil May 25 12:07:28.043: INFO: Deleting PersistentVolume "local-pvq7lsb" STEP: Cleaning up PVC and PV May 25 12:07:28.048: INFO: pvc is nil May 25 12:07:28.048: INFO: Deleting PersistentVolume "local-pvvc5h9" STEP: Cleaning up PVC and PV May 25 12:07:28.052: INFO: pvc is nil May 25 12:07:28.052: INFO: Deleting PersistentVolume "local-pv8kzsg" STEP: Cleaning up PVC and PV May 25 12:07:28.056: INFO: pvc is nil May 25 12:07:28.056: INFO: Deleting PersistentVolume "local-pv7fk64" STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-ba777b98-3c47-4649-822f-07ca242b6e0d" May 25 12:07:28.060: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ba777b98-3c47-4649-822f-07ca242b6e0d"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:28.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:28.461: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ba777b98-3c47-4649-822f-07ca242b6e0d] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:28.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-a6f5ba18-c850-4589-a70c-af7bc3a34fdb" May 25 12:07:28.596: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a6f5ba18-c850-4589-a70c-af7bc3a34fdb"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:28.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:28.728: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a6f5ba18-c850-4589-a70c-af7bc3a34fdb] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:28.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-3bce380d-47e1-4535-bb8b-4fbfb47c757e" May 25 12:07:28.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3bce380d-47e1-4535-bb8b-4fbfb47c757e"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:28.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:29.006: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3bce380d-47e1-4535-bb8b-4fbfb47c757e] Namespace:persistent-local-volumes-test-1036 
PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:29.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-0e8b414e-b971-4667-8740-9150a78ed44d" May 25 12:07:29.165: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0e8b414e-b971-4667-8740-9150a78ed44d"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:29.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:29.310: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0e8b414e-b971-4667-8740-9150a78ed44d] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:29.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-5ac2104a-28be-44db-a3fd-4617256684d4" May 25 12:07:29.455: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5ac2104a-28be-44db-a3fd-4617256684d4"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:29.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:29.593: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5ac2104a-28be-44db-a3fd-4617256684d4] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:29.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-115089cf-9090-48a8-ac7a-6661cd0e6c38" May 25 12:07:29.736: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-115089cf-9090-48a8-ac7a-6661cd0e6c38"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:29.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:29.865: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-115089cf-9090-48a8-ac7a-6661cd0e6c38] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:29.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-54c91b64-9f15-4c5b-a325-7a314bfa76db" May 25 12:07:29.995: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-54c91b64-9f15-4c5b-a325-7a314bfa76db"] 
Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:29.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:30.126: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-54c91b64-9f15-4c5b-a325-7a314bfa76db] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:30.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-3d76a265-4f56-4b11-a4b0-8a2501594124" May 25 12:07:30.260: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3d76a265-4f56-4b11-a4b0-8a2501594124"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:30.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:30.395: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3d76a265-4f56-4b11-a4b0-8a2501594124] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:30.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-2ad427c7-44d9-4e22-9585-f3ac16a4f6d3" May 25 12:07:30.533: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2ad427c7-44d9-4e22-9585-f3ac16a4f6d3"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:30.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:30.675: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2ad427c7-44d9-4e22-9585-f3ac16a4f6d3] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:30.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker" at path "/tmp/local-volume-test-bf380325-d7e9-4f5c-b50a-88002faf02cd" May 25 12:07:30.823: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-bf380325-d7e9-4f5c-b50a-88002faf02cd"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:30.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:30.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bf380325-d7e9-4f5c-b50a-88002faf02cd] Namespace:persistent-local-volumes-test-1036 
PodName:hostexec-v1.21-worker-fcq2c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:30.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "v1.21-worker2" STEP: Cleaning up PVC and PV May 25 12:07:31.088: INFO: pvc is nil May 25 12:07:31.088: INFO: Deleting PersistentVolume "local-pv69ltj" STEP: Cleaning up PVC and PV May 25 12:07:31.096: INFO: pvc is nil May 25 12:07:31.096: INFO: Deleting PersistentVolume "local-pvxgcgv" STEP: Cleaning up PVC and PV May 25 12:07:31.105: INFO: pvc is nil May 25 12:07:31.105: INFO: Deleting PersistentVolume "local-pvxmkj6" STEP: Cleaning up PVC and PV May 25 12:07:31.109: INFO: pvc is nil May 25 12:07:31.109: INFO: Deleting PersistentVolume "local-pvm7rb4" STEP: Cleaning up PVC and PV May 25 12:07:31.114: INFO: pvc is nil May 25 12:07:31.114: INFO: Deleting PersistentVolume "local-pv5cbt5" STEP: Cleaning up PVC and PV May 25 12:07:31.118: INFO: pvc is nil May 25 12:07:31.118: INFO: Deleting PersistentVolume "local-pv2j9gh" STEP: Cleaning up PVC and PV May 25 12:07:31.123: INFO: pvc is nil May 25 12:07:31.123: INFO: Deleting PersistentVolume "local-pv6cc7n" STEP: Cleaning up PVC and PV May 25 12:07:31.127: INFO: pvc is nil May 25 12:07:31.127: INFO: Deleting PersistentVolume "local-pvhf6jr" STEP: Cleaning up PVC and PV May 25 12:07:31.131: INFO: pvc is nil May 25 12:07:31.131: INFO: Deleting PersistentVolume "local-pvcpxqv" STEP: Cleaning up PVC and PV May 25 12:07:31.136: INFO: pvc is nil May 25 12:07:31.136: INFO: Deleting PersistentVolume "local-pvmzm2c" STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-42a78681-28e8-49a4-b2ca-a87b98a550d5" May 25 12:07:31.140: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-42a78681-28e8-49a4-b2ca-a87b98a550d5"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:31.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:31.284: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-42a78681-28e8-49a4-b2ca-a87b98a550d5] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:31.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-41cf342f-4548-4eaa-858e-0fdae015d88f" May 25 12:07:31.423: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-41cf342f-4548-4eaa-858e-0fdae015d88f"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:31.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:31.574: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-41cf342f-4548-4eaa-858e-0fdae015d88f] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:31.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-736f7210-1dde-49b6-af7d-d7bde721c546" May 25 12:07:31.710: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-736f7210-1dde-49b6-af7d-d7bde721c546"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:31.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:31.857: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-736f7210-1dde-49b6-af7d-d7bde721c546] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:31.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-48d7d251-28a8-4094-906c-52815d7ee469" May 25 12:07:31.990: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-48d7d251-28a8-4094-906c-52815d7ee469"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:31.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:32.120: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-48d7d251-28a8-4094-906c-52815d7ee469] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:32.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-108d762a-9f8f-47a7-85e8-9a7d0c3ea22b" May 25 12:07:32.253: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-108d762a-9f8f-47a7-85e8-9a7d0c3ea22b"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:32.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:32.380: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-108d762a-9f8f-47a7-85e8-9a7d0c3ea22b] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:32.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-c433520f-8434-4eed-9df5-1706f3152eeb" May 25 12:07:32.509: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c433520f-8434-4eed-9df5-1706f3152eeb"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:32.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:32.640: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c433520f-8434-4eed-9df5-1706f3152eeb] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:32.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-51b74732-2615-47e5-a7a0-3009afc514fa" May 25 12:07:32.770: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-51b74732-2615-47e5-a7a0-3009afc514fa"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:32.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:32.898: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-51b74732-2615-47e5-a7a0-3009afc514fa] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:32.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-f59a9074-cc6b-425f-9ac4-01a8bbd1743e" May 25 12:07:33.038: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f59a9074-cc6b-425f-9ac4-01a8bbd1743e"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:33.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:33.175: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f59a9074-cc6b-425f-9ac4-01a8bbd1743e] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:33.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-b64a52c1-ef51-4d22-a134-bb3e2bd753ed" May 25 12:07:33.312: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b64a52c1-ef51-4d22-a134-bb3e2bd753ed"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:33.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:33.445: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b64a52c1-ef51-4d22-a134-bb3e2bd753ed] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:33.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v1.21-worker2" at path "/tmp/local-volume-test-4e3372c4-1821-4292-9a62-34ee41331d27" May 25 12:07:33.575: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4e3372c4-1821-4292-9a62-34ee41331d27"] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:33.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 25 12:07:33.714: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4e3372c4-1821-4292-9a62-34ee41331d27] Namespace:persistent-local-volumes-test-1036 PodName:hostexec-v1.21-worker2-bjvmt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 25 12:07:33.714: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 12:07:33.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1036" for this suite. • [SLOW TEST:87.057 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":18,"completed":2,"skipped":5671,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 25 12:07:33.877: INFO: Running AfterSuite actions on all nodes May 25 12:07:33.877: INFO: Running AfterSuite actions on node 1 May 25 12:07:33.877: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml {"msg":"Test Suite completed","total":18,"completed":2,"skipped":5769,"failed":0} Ran 2 of 5771 Specs in 205.206 seconds SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 5769 Skipped PASS Ginkgo ran 1 suite in 3m26.786927527s Test Suite Passed
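
For reference, each "Creating tmpfs mount point" step in the run above reduces to the shell command recorded in its ExecWithOptions entry, executed in the node's mount namespace via nsenter from the privileged hostexec pod. A minimal stand-alone sketch of that setup, assuming direct shell access to the worker node (the /tmp/local-volume-test-<uuid> name is generated per volume by the test; uuidgen here is just a stand-in for that):

# create a 10m tmpfs-backed local volume directory, as the test does for each of its 20 volumes
DIR="/tmp/local-volume-test-$(uuidgen)"
mkdir -p "${DIR}"
mount -t tmpfs -o size=10m tmpfs-"${DIR}" "${DIR}"   # source name "tmpfs-<path>" mirrors the logged command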
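
The "Create 20 PVs" step and the recycle goroutine's repeated "Delete "local-pv..." and create a new PV for same local volume storage" steps go through the Kubernetes API from the test binary itself; a rough kubectl equivalent for a single local PV is sketched below. The PV name, capacity, and storage class are illustrative assumptions, not values taken from the log; only the path and node name come from the run above.

# illustrative only: a local PV bound to one tmpfs mount point on node v1.21-worker
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 10Mi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-volume-test-a6f5ba18-c850-4589-a70c-af7bc3a34fdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["v1.21-worker"]
EOF

# "recycling" a released PV then amounts to deleting it and re-creating it for the same local path
kubectl delete pv local-pv-example   # followed by re-applying the manifest above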
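
Likewise, each teardown cycle above ("Unmount tmpfs mount point ..." followed by "Removing the test directory") comes down to two commands per volume, again run on the node through nsenter; a minimal sketch using one of the paths from this run:

# clean up a single tmpfs-backed local volume
DIR="/tmp/local-volume-test-ba777b98-3c47-4649-822f-07ca242b6e0d"
umount "${DIR}"   # detach the tmpfs instance
rm -r "${DIR}"    # remove the now-empty mount point directory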