I0521 17:12:16.894436 17 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0521 17:12:16.894592 17 e2e.go:129] Starting e2e run "d022c509-2259-4eeb-8685-191c851442ea" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621617135 - Will randomize all specs
Will run 17 of 5484 specs

May 21 17:12:16.989: INFO: >>> kubeConfig: /root/.kube/config
May 21 17:12:16.993: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 21 17:12:17.021: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 21 17:12:17.071: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 21 17:12:17.071: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 21 17:12:17.071: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 21 17:12:17.086: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 21 17:12:17.086: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 21 17:12:17.086: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 21 17:12:17.086: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 21 17:12:17.086: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 21 17:12:17.086: INFO: e2e test version: v1.19.11
May 21 17:12:17.088: INFO: kube-apiserver version: v1.19.11
May 21 17:12:17.088: INFO: >>> kubeConfig: /root/.kube/config
May 21 17:12:17.093: INFO: Cluster IP family: ipv4
SSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:12:17.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
May 21 17:12:17.123: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 17:12:17.131: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 21 17:12:17.134: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:12:17.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-3399" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.055 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics in Volume Manager [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:12:17.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 21 17:12:19.197: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6662 PodName:hostexec-kali-worker-vll69 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:12:19.197: INFO: >>> kubeConfig: /root/.kube/config
May 21 17:12:19.359: INFO: exec kali-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 21 17:12:19.359: INFO: exec kali-worker: stdout: "0\n"
May 21 17:12:19.359: INFO: exec kali-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 21 17:12:19.359: INFO: exec kali-worker: exit code: 0
May 21 17:12:19.359: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:12:19.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6662" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.220 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Pod Disks [Serial] attach on previously attached volumes should work
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:12:19.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
[It] [Serial] attach on previously attached volumes should work
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457
May 21 17:12:19.413: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:12:19.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-4570" for this suite.
S [SKIPPING] [0.051 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Serial] attach on previously attached volumes should work [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:12:19.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 21 17:12:19.459: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:12:19.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-1495" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics with the correct PVC ref [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:12:19.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 21 17:12:19.519: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:12:19.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-7620" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create none metrics for pvc controller before creating any PV or PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:12:19.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 21 17:12:21.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6554 PodName:hostexec-kali-worker-s44kc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:12:21.584: INFO: >>> kubeConfig: /root/.kube/config
May 21 17:12:21.769: INFO: exec kali-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 21 17:12:21.769: INFO: exec kali-worker: stdout: "0\n"
May 21 17:12:21.769: INFO: exec kali-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 21 17:12:21.769: INFO: exec kali-worker: exit code: 0
May 21 17:12:21.769: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:12:21.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6554" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.244 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 [BeforeEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:12:21.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 STEP: Setting up 10 local volumes on node "kali-worker" STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-7465f901-38bf-404b-8b1c-9182157bf3ab" May 21 17:12:23.850: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7465f901-38bf-404b-8b1c-9182157bf3ab" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7465f901-38bf-404b-8b1c-9182157bf3ab" "/tmp/local-volume-test-7465f901-38bf-404b-8b1c-9182157bf3ab"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:23.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-7b49e007-5524-42c5-9c4a-6b90aec92d64" May 21 17:12:24.017: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7b49e007-5524-42c5-9c4a-6b90aec92d64" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7b49e007-5524-42c5-9c4a-6b90aec92d64" "/tmp/local-volume-test-7b49e007-5524-42c5-9c4a-6b90aec92d64"] Namespace:persistent-local-volumes-test-3256 
PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:24.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-96437ead-a906-4638-a6f3-9699c4aa399a" May 21 17:12:24.153: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-96437ead-a906-4638-a6f3-9699c4aa399a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-96437ead-a906-4638-a6f3-9699c4aa399a" "/tmp/local-volume-test-96437ead-a906-4638-a6f3-9699c4aa399a"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:24.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-0be6c878-e9ba-417a-8dd6-df60b3466212" May 21 17:12:24.283: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0be6c878-e9ba-417a-8dd6-df60b3466212" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0be6c878-e9ba-417a-8dd6-df60b3466212" "/tmp/local-volume-test-0be6c878-e9ba-417a-8dd6-df60b3466212"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:24.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-88ba8cda-a80c-4b24-8232-4913fa00414a" May 21 17:12:24.415: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-88ba8cda-a80c-4b24-8232-4913fa00414a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-88ba8cda-a80c-4b24-8232-4913fa00414a" 
"/tmp/local-volume-test-88ba8cda-a80c-4b24-8232-4913fa00414a"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:24.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-996e084a-3ccc-46bd-adf1-410647a4e65e" May 21 17:12:24.576: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-996e084a-3ccc-46bd-adf1-410647a4e65e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-996e084a-3ccc-46bd-adf1-410647a4e65e" "/tmp/local-volume-test-996e084a-3ccc-46bd-adf1-410647a4e65e"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:24.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-f5cd8efb-ac99-48b7-9728-860cfca65c2f" May 21 17:12:24.733: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f5cd8efb-ac99-48b7-9728-860cfca65c2f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f5cd8efb-ac99-48b7-9728-860cfca65c2f" "/tmp/local-volume-test-f5cd8efb-ac99-48b7-9728-860cfca65c2f"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:24.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-35949dfb-041f-4ddd-88fb-18993d50d4a6" May 21 17:12:24.890: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-35949dfb-041f-4ddd-88fb-18993d50d4a6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-35949dfb-041f-4ddd-88fb-18993d50d4a6" "/tmp/local-volume-test-35949dfb-041f-4ddd-88fb-18993d50d4a6"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:24.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-58067dd1-34c5-4b1d-ab5f-8b40a212844c" May 21 17:12:25.032: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-58067dd1-34c5-4b1d-ab5f-8b40a212844c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-58067dd1-34c5-4b1d-ab5f-8b40a212844c" "/tmp/local-volume-test-58067dd1-34c5-4b1d-ab5f-8b40a212844c"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:25.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-e529360f-15b1-43e4-8339-77abdc31924b" May 21 17:12:25.181: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e529360f-15b1-43e4-8339-77abdc31924b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e529360f-15b1-43e4-8339-77abdc31924b" "/tmp/local-volume-test-e529360f-15b1-43e4-8339-77abdc31924b"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:25.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "kali-worker2" STEP: Creating tmpfs mount point on node "kali-worker2" at path 
"/tmp/local-volume-test-8111c802-2ef2-4f83-b326-d3355bac5a02" May 21 17:12:27.343: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8111c802-2ef2-4f83-b326-d3355bac5a02" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8111c802-2ef2-4f83-b326-d3355bac5a02" "/tmp/local-volume-test-8111c802-2ef2-4f83-b326-d3355bac5a02"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:27.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-241ce5ca-4700-4ef1-90b9-3857f968bcd1" May 21 17:12:27.469: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-241ce5ca-4700-4ef1-90b9-3857f968bcd1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-241ce5ca-4700-4ef1-90b9-3857f968bcd1" "/tmp/local-volume-test-241ce5ca-4700-4ef1-90b9-3857f968bcd1"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:27.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-c48df879-00fb-45ce-8f3e-5c5178ce82a7" May 21 17:12:27.614: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c48df879-00fb-45ce-8f3e-5c5178ce82a7" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c48df879-00fb-45ce-8f3e-5c5178ce82a7" "/tmp/local-volume-test-c48df879-00fb-45ce-8f3e-5c5178ce82a7"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 
17:12:27.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-8fd85b96-e6ca-46d1-85ec-e662bce4e038" May 21 17:12:27.765: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8fd85b96-e6ca-46d1-85ec-e662bce4e038" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8fd85b96-e6ca-46d1-85ec-e662bce4e038" "/tmp/local-volume-test-8fd85b96-e6ca-46d1-85ec-e662bce4e038"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:27.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-ec6fcdfe-7f43-4fc7-9dec-a8edd5687836" May 21 17:12:27.903: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ec6fcdfe-7f43-4fc7-9dec-a8edd5687836" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ec6fcdfe-7f43-4fc7-9dec-a8edd5687836" "/tmp/local-volume-test-ec6fcdfe-7f43-4fc7-9dec-a8edd5687836"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:27.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-ad5c8e2c-6e29-4720-a2fb-2e8d17ab2b56" May 21 17:12:28.057: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ad5c8e2c-6e29-4720-a2fb-2e8d17ab2b56" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ad5c8e2c-6e29-4720-a2fb-2e8d17ab2b56" "/tmp/local-volume-test-ad5c8e2c-6e29-4720-a2fb-2e8d17ab2b56"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:28.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-6a8ce1e4-2126-499e-8c09-4113efac2584" May 21 17:12:28.199: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6a8ce1e4-2126-499e-8c09-4113efac2584" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6a8ce1e4-2126-499e-8c09-4113efac2584" "/tmp/local-volume-test-6a8ce1e4-2126-499e-8c09-4113efac2584"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:28.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-ceb22d19-01c6-46e2-83cb-4edb645b9f9b" May 21 17:12:28.307: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ceb22d19-01c6-46e2-83cb-4edb645b9f9b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ceb22d19-01c6-46e2-83cb-4edb645b9f9b" "/tmp/local-volume-test-ceb22d19-01c6-46e2-83cb-4edb645b9f9b"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:28.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-0338c386-6811-458e-a486-2e5b077bfa76" May 21 17:12:28.445: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0338c386-6811-458e-a486-2e5b077bfa76" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0338c386-6811-458e-a486-2e5b077bfa76" 
"/tmp/local-volume-test-0338c386-6811-458e-a486-2e5b077bfa76"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:28.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-54c3565b-afe1-43e7-94d1-39d1071ea075" May 21 17:12:28.601: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-54c3565b-afe1-43e7-94d1-39d1071ea075" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-54c3565b-afe1-43e7-94d1-39d1071ea075" "/tmp/local-volume-test-54c3565b-afe1-43e7-94d1-39d1071ea075"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:12:28.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully May 21 17:12:33.921: INFO: Deleting pod pod-159bb809-4568-41eb-b3c3-cf652320d8bf May 21 17:12:33.929: INFO: Deleting PersistentVolumeClaim "pvc-t2vrx" May 21 17:12:33.934: INFO: Deleting PersistentVolumeClaim "pvc-bfvkj" May 21 17:12:33.938: INFO: Deleting PersistentVolumeClaim "pvc-5q6ls" May 21 17:12:33.943: INFO: 1/28 pods finished STEP: Delete "local-pvwqt2h" and create a new PV for same local volume storage STEP: Delete "local-pvwqt2h" and create a new PV for same local volume storage STEP: Delete "local-pv8cjxt" and create a new PV for same local volume storage STEP: Delete "local-pv8cjxt" and create a new PV for same local 
volume storage STEP: Delete "local-pv9fvlx" and create a new PV for same local volume storage STEP: Delete "local-pv9fvlx" and create a new PV for same local volume storage May 21 17:12:35.921: INFO: Deleting pod pod-2831b8d4-0663-4fcc-a0be-04211b42e9c2 May 21 17:12:35.930: INFO: Deleting PersistentVolumeClaim "pvc-ddkdw" May 21 17:12:35.935: INFO: Deleting PersistentVolumeClaim "pvc-7scfx" May 21 17:12:35.940: INFO: Deleting PersistentVolumeClaim "pvc-sprgb" May 21 17:12:35.945: INFO: 2/28 pods finished May 21 17:12:35.945: INFO: Deleting pod pod-a0e0c8ad-f41c-4159-87fc-4d66287db736 May 21 17:12:35.953: INFO: Deleting PersistentVolumeClaim "pvc-dgtqk" May 21 17:12:35.957: INFO: Deleting PersistentVolumeClaim "pvc-5zdmq" STEP: Delete "local-pvwgb4h" and create a new PV for same local volume storage May 21 17:12:35.960: INFO: Deleting PersistentVolumeClaim "pvc-6zb4s" May 21 17:12:35.964: INFO: 3/28 pods finished STEP: Delete "local-pvwgb4h" and create a new PV for same local volume storage STEP: Delete "local-pv5nwtx" and create a new PV for same local volume storage STEP: Delete "local-pv5nwtx" and create a new PV for same local volume storage STEP: Delete "local-pvsg52t" and create a new PV for same local volume storage STEP: Delete "local-pvsg52t" and create a new PV for same local volume storage STEP: Delete "local-pvbsv89" and create a new PV for same local volume storage STEP: Delete "local-pvbsv89" and create a new PV for same local volume storage STEP: Delete "local-pvn28bm" and create a new PV for same local volume storage STEP: Delete "local-pv29mf7" and create a new PV for same local volume storage May 21 17:12:36.921: INFO: Deleting pod pod-5232762d-1a58-44f4-aeb1-85d5bd5f0156 May 21 17:12:36.929: INFO: Deleting PersistentVolumeClaim "pvc-szn82" May 21 17:12:36.934: INFO: Deleting PersistentVolumeClaim "pvc-tghs8" May 21 17:12:36.938: INFO: Deleting PersistentVolumeClaim "pvc-7pcgx" May 21 17:12:36.942: INFO: 4/28 pods finished STEP: Delete 
"local-pvlmdzf" and create a new PV for same local volume storage STEP: Delete "local-pvlmdzf" and create a new PV for same local volume storage STEP: Delete "local-pvbbt4d" and create a new PV for same local volume storage STEP: Delete "local-pvbbt4d" and create a new PV for same local volume storage STEP: Delete "local-pvbf7rz" and create a new PV for same local volume storage STEP: Delete "local-pvbf7rz" and create a new PV for same local volume storage May 21 17:12:37.921: INFO: Deleting pod pod-53655254-6c5e-4a1b-958d-0abf3b426188 May 21 17:12:37.930: INFO: Deleting PersistentVolumeClaim "pvc-5dz5r" May 21 17:12:37.935: INFO: Deleting PersistentVolumeClaim "pvc-cxzmw" May 21 17:12:37.939: INFO: Deleting PersistentVolumeClaim "pvc-ddd82" May 21 17:12:37.943: INFO: 5/28 pods finished May 21 17:12:37.943: INFO: Deleting pod pod-99f2bd52-8d85-42e5-b02b-507b20abbe1c May 21 17:12:37.951: INFO: Deleting PersistentVolumeClaim "pvc-2ws9t" STEP: Delete "local-pvpmdrn" and create a new PV for same local volume storage May 21 17:12:37.955: INFO: Deleting PersistentVolumeClaim "pvc-kshb8" May 21 17:12:37.960: INFO: Deleting PersistentVolumeClaim "pvc-fr2gm" May 21 17:12:37.964: INFO: 6/28 pods finished STEP: Delete "local-pvpmdrn" and create a new PV for same local volume storage STEP: Delete "local-pvpprqx" and create a new PV for same local volume storage STEP: Delete "local-pvpprqx" and create a new PV for same local volume storage STEP: Delete "local-pv4vz7g" and create a new PV for same local volume storage STEP: Delete "local-pv4vz7g" and create a new PV for same local volume storage STEP: Delete "local-pvjbz4q" and create a new PV for same local volume storage STEP: Delete "local-pv9d2kr" and create a new PV for same local volume storage STEP: Delete "local-pvbr2r5" and create a new PV for same local volume storage May 21 17:12:42.921: INFO: Deleting pod pod-3d42f48d-dcfa-48d6-8321-d8e07c770d7f May 21 17:12:42.931: INFO: Deleting PersistentVolumeClaim "pvc-t74vt" 
May 21 17:12:42.936: INFO: Deleting PersistentVolumeClaim "pvc-r4rzl" May 21 17:12:42.943: INFO: Deleting PersistentVolumeClaim "pvc-2qhx6" May 21 17:12:42.949: INFO: 7/28 pods finished STEP: Delete "local-pv9f477" and create a new PV for same local volume storage STEP: Delete "local-pv9f477" and create a new PV for same local volume storage STEP: Delete "local-pv5qz42" and create a new PV for same local volume storage STEP: Delete "local-pv7p74z" and create a new PV for same local volume storage May 21 17:12:44.921: INFO: Deleting pod pod-688bd1c0-b0e4-4e4e-acbf-8cd6a03b6690 May 21 17:12:44.929: INFO: Deleting PersistentVolumeClaim "pvc-85h6q" May 21 17:12:44.933: INFO: Deleting PersistentVolumeClaim "pvc-sckrw" May 21 17:12:44.938: INFO: Deleting PersistentVolumeClaim "pvc-7j2p8" May 21 17:12:44.943: INFO: 8/28 pods finished STEP: Delete "local-pvxchfd" and create a new PV for same local volume storage STEP: Delete "local-pvxchfd" and create a new PV for same local volume storage STEP: Delete "local-pvkrhdm" and create a new PV for same local volume storage STEP: Delete "local-pvkrhdm" and create a new PV for same local volume storage STEP: Delete "local-pvc7l8c" and create a new PV for same local volume storage STEP: Delete "local-pvc7l8c" and create a new PV for same local volume storage May 21 17:12:45.921: INFO: Deleting pod pod-601f8247-f866-45f6-aa53-b4fc953455b7 May 21 17:12:45.931: INFO: Deleting PersistentVolumeClaim "pvc-9vgt9" May 21 17:12:45.935: INFO: Deleting PersistentVolumeClaim "pvc-xkcpj" May 21 17:12:45.939: INFO: Deleting PersistentVolumeClaim "pvc-gcdrs" May 21 17:12:45.944: INFO: 9/28 pods finished May 21 17:12:45.944: INFO: Deleting pod pod-b35b8580-6cbd-4b5a-8875-3ca0c4b59376 May 21 17:12:45.951: INFO: Deleting PersistentVolumeClaim "pvc-tw2hs" May 21 17:12:45.954: INFO: Deleting PersistentVolumeClaim "pvc-l62jt" STEP: Delete "local-pvbwxvv" and create a new PV for same local volume storage May 21 17:12:45.958: INFO: Deleting 
PersistentVolumeClaim "pvc-w6xhp" May 21 17:12:45.962: INFO: 10/28 pods finished STEP: Delete "local-pvbwxvv" and create a new PV for same local volume storage STEP: Delete "local-pvd4lqd" and create a new PV for same local volume storage STEP: Delete "local-pvgg6xl" and create a new PV for same local volume storage STEP: Delete "local-pvwwf9g" and create a new PV for same local volume storage STEP: Delete "local-pvwwf9g" and create a new PV for same local volume storage STEP: Delete "local-pvs29jd" and create a new PV for same local volume storage STEP: Delete "local-pvll5g9" and create a new PV for same local volume storage May 21 17:12:47.921: INFO: Deleting pod pod-6da74351-ada4-43d1-a348-15e412fe1d52 May 21 17:12:47.930: INFO: Deleting PersistentVolumeClaim "pvc-7l8k6" May 21 17:12:47.935: INFO: Deleting PersistentVolumeClaim "pvc-pvvw5" May 21 17:12:47.940: INFO: Deleting PersistentVolumeClaim "pvc-9gsrm" May 21 17:12:47.945: INFO: 11/28 pods finished May 21 17:12:47.945: INFO: Deleting pod pod-c5861396-4558-4ad2-afab-f57b66212201 May 21 17:12:47.952: INFO: Deleting PersistentVolumeClaim "pvc-48hwl" STEP: Delete "local-pv99r8j" and create a new PV for same local volume storage May 21 17:12:47.957: INFO: Deleting PersistentVolumeClaim "pvc-plbjq" May 21 17:12:47.961: INFO: Deleting PersistentVolumeClaim "pvc-btvm4" May 21 17:12:47.965: INFO: 12/28 pods finished STEP: Delete "local-pv99r8j" and create a new PV for same local volume storage STEP: Delete "local-pvd5nvg" and create a new PV for same local volume storage STEP: Delete "local-pvglvfc" and create a new PV for same local volume storage STEP: Delete "local-pvxp4jr" and create a new PV for same local volume storage STEP: Delete "local-pvmh9wl" and create a new PV for same local volume storage STEP: Delete "local-pvxhvk2" and create a new PV for same local volume storage May 21 17:12:53.922: INFO: Deleting pod pod-789e81ac-9b2f-4421-b283-cfc5d22329d0 May 21 17:12:53.931: INFO: Deleting 
PersistentVolumeClaim "pvc-g9tkz" May 21 17:12:53.937: INFO: Deleting PersistentVolumeClaim "pvc-jvp9m" May 21 17:12:53.941: INFO: Deleting PersistentVolumeClaim "pvc-5zlpp" May 21 17:12:53.945: INFO: 13/28 pods finished STEP: Delete "local-pvr482s" and create a new PV for same local volume storage STEP: Delete "local-pvr482s" and create a new PV for same local volume storage STEP: Delete "local-pvntc2r" and create a new PV for same local volume storage STEP: Delete "local-pvf824m" and create a new PV for same local volume storage May 21 17:12:54.922: INFO: Deleting pod pod-e5f77aa1-b8ce-4c59-859d-856fae0e7643 May 21 17:12:54.936: INFO: Deleting PersistentVolumeClaim "pvc-c7kjp" May 21 17:12:54.940: INFO: Deleting PersistentVolumeClaim "pvc-jdh5p" May 21 17:12:54.944: INFO: Deleting PersistentVolumeClaim "pvc-fbcsw" May 21 17:12:54.948: INFO: 14/28 pods finished STEP: Delete "local-pvvpmg2" and create a new PV for same local volume storage STEP: Delete "local-pvvpmg2" and create a new PV for same local volume storage STEP: Delete "local-pvdgxdd" and create a new PV for same local volume storage STEP: Delete "local-pvdgxdd" and create a new PV for same local volume storage STEP: Delete "local-pvjkr9k" and create a new PV for same local volume storage STEP: Delete "local-pvjkr9k" and create a new PV for same local volume storage May 21 17:12:55.922: INFO: Deleting pod pod-0bf0c756-aa53-40f9-8927-76489659edb2 May 21 17:12:55.931: INFO: Deleting PersistentVolumeClaim "pvc-zkbx4" May 21 17:12:55.935: INFO: Deleting PersistentVolumeClaim "pvc-2g8cp" May 21 17:12:55.939: INFO: Deleting PersistentVolumeClaim "pvc-22t8t" May 21 17:12:55.943: INFO: 15/28 pods finished May 21 17:12:55.943: INFO: Deleting pod pod-788cc190-aa29-4d4a-9279-5cf43429e4ed May 21 17:12:55.951: INFO: Deleting PersistentVolumeClaim "pvc-562k7" STEP: Delete "local-pvhgl9v" and create a new PV for same local volume storage May 21 17:12:55.955: INFO: Deleting PersistentVolumeClaim "pvc-chw5f" May 21 
17:12:55.959: INFO: Deleting PersistentVolumeClaim "pvc-lgg89" May 21 17:12:55.963: INFO: 16/28 pods finished STEP: Delete "local-pvhgl9v" and create a new PV for same local volume storage STEP: Delete "local-pvlqgzp" and create a new PV for same local volume storage STEP: Delete "local-pvlqgzp" and create a new PV for same local volume storage STEP: Delete "local-pvvtwhl" and create a new PV for same local volume storage STEP: Delete "local-pvvtwhl" and create a new PV for same local volume storage STEP: Delete "local-pvs9rwj" and create a new PV for same local volume storage STEP: Delete "local-pv64hz9" and create a new PV for same local volume storage STEP: Delete "local-pvkcrng" and create a new PV for same local volume storage May 21 17:12:57.921: INFO: Deleting pod pod-5dda21b7-4dca-4692-b5a0-ae782d8c8217 May 21 17:12:57.930: INFO: Deleting PersistentVolumeClaim "pvc-frkkv" May 21 17:12:57.935: INFO: Deleting PersistentVolumeClaim "pvc-kzgbc" May 21 17:12:57.939: INFO: Deleting PersistentVolumeClaim "pvc-hpgh9" May 21 17:12:57.945: INFO: 17/28 pods finished May 21 17:12:57.945: INFO: Deleting pod pod-ad4bd1dd-d010-4008-9c03-a425763b9747 May 21 17:12:57.952: INFO: Deleting PersistentVolumeClaim "pvc-rxmpj" STEP: Delete "local-pv95lsb" and create a new PV for same local volume storage May 21 17:12:57.956: INFO: Deleting PersistentVolumeClaim "pvc-2v897" May 21 17:12:57.960: INFO: Deleting PersistentVolumeClaim "pvc-sp282" STEP: Delete "local-pv95lsb" and create a new PV for same local volume storage May 21 17:12:57.964: INFO: 18/28 pods finished STEP: Delete "local-pvvhmsv" and create a new PV for same local volume storage STEP: Delete "local-pvvhmsv" and create a new PV for same local volume storage STEP: Delete "local-pvzvmgb" and create a new PV for same local volume storage STEP: Delete "local-pv5stx4" and create a new PV for same local volume storage STEP: Delete "local-pvf44zw" and create a new PV for same local volume storage STEP: Delete "local-pvqtk68" 
and create a new PV for same local volume storage May 21 17:13:03.922: INFO: Deleting pod pod-873c1b26-4435-4990-9570-9eab06f1c210 May 21 17:13:03.931: INFO: Deleting PersistentVolumeClaim "pvc-dbkrk" May 21 17:13:03.936: INFO: Deleting PersistentVolumeClaim "pvc-47zgk" May 21 17:13:03.940: INFO: Deleting PersistentVolumeClaim "pvc-bc8d8" May 21 17:13:03.945: INFO: 19/28 pods finished May 21 17:13:03.945: INFO: Deleting pod pod-8b09e0e7-fafd-46e1-9f6b-179259e622ca May 21 17:13:03.954: INFO: Deleting PersistentVolumeClaim "pvc-bw4d7" May 21 17:13:03.958: INFO: Deleting PersistentVolumeClaim "pvc-4lg7x" STEP: Delete "local-pv5mmmq" and create a new PV for same local volume storage May 21 17:13:03.963: INFO: Deleting PersistentVolumeClaim "pvc-f7t94" May 21 17:13:03.967: INFO: 20/28 pods finished STEP: Delete "local-pv5mmmq" and create a new PV for same local volume storage STEP: Delete "local-pv69ws9" and create a new PV for same local volume storage STEP: Delete "local-pv69ws9" and create a new PV for same local volume storage STEP: Delete "local-pvm69ss" and create a new PV for same local volume storage STEP: Delete "local-pvm69ss" and create a new PV for same local volume storage STEP: Delete "local-pvvxrnp" and create a new PV for same local volume storage STEP: Delete "local-pvvxrnp" and create a new PV for same local volume storage STEP: Delete "local-pvh9wmq" and create a new PV for same local volume storage STEP: Delete "local-pvrqvc8" and create a new PV for same local volume storage May 21 17:13:05.922: INFO: Deleting pod pod-4951c4c4-30e4-4c7a-a660-b131cfdd44ba May 21 17:13:05.930: INFO: Deleting PersistentVolumeClaim "pvc-4hg2f" May 21 17:13:05.936: INFO: Deleting PersistentVolumeClaim "pvc-n5g9t" May 21 17:13:05.940: INFO: Deleting PersistentVolumeClaim "pvc-j4qfg" May 21 17:13:05.944: INFO: 21/28 pods finished May 21 17:13:05.944: INFO: Deleting pod pod-f36383e2-0558-4851-b7bf-8b91f3d01e37 May 21 17:13:05.950: INFO: Deleting PersistentVolumeClaim 
"pvc-kltt7" May 21 17:13:05.954: INFO: Deleting PersistentVolumeClaim "pvc-9srj4" STEP: Delete "local-pv4jdcc" and create a new PV for same local volume storage May 21 17:13:05.959: INFO: Deleting PersistentVolumeClaim "pvc-59qmx" May 21 17:13:05.963: INFO: 22/28 pods finished STEP: Delete "local-pv4jdcc" and create a new PV for same local volume storage STEP: Delete "local-pvcsl26" and create a new PV for same local volume storage STEP: Delete "local-pvcsl26" and create a new PV for same local volume storage STEP: Delete "local-pvk8h9x" and create a new PV for same local volume storage STEP: Delete "local-pvk8h9x" and create a new PV for same local volume storage STEP: Delete "local-pvq78td" and create a new PV for same local volume storage STEP: Delete "local-pvq78td" and create a new PV for same local volume storage STEP: Delete "local-pvmx6rc" and create a new PV for same local volume storage STEP: Delete "local-pvcxq5m" and create a new PV for same local volume storage May 21 17:13:07.921: INFO: Deleting pod pod-474d2b7a-3eb9-4109-b50d-8f44a811bac4 May 21 17:13:07.929: INFO: Deleting PersistentVolumeClaim "pvc-wv7b6" May 21 17:13:07.934: INFO: Deleting PersistentVolumeClaim "pvc-kjbxd" May 21 17:13:07.939: INFO: Deleting PersistentVolumeClaim "pvc-jb2km" May 21 17:13:07.943: INFO: 23/28 pods finished STEP: Delete "local-pvswqlc" and create a new PV for same local volume storage STEP: Delete "local-pvswqlc" and create a new PV for same local volume storage STEP: Delete "local-pvjq2c6" and create a new PV for same local volume storage STEP: Delete "local-pvjq2c6" and create a new PV for same local volume storage STEP: Delete "local-pvnthvs" and create a new PV for same local volume storage STEP: Delete "local-pvnthvs" and create a new PV for same local volume storage May 21 17:13:09.921: INFO: Deleting pod pod-2a718877-290a-4d2a-a480-0d76f6aa908b May 21 17:13:09.931: INFO: Deleting PersistentVolumeClaim "pvc-9l2dj" May 21 17:13:09.935: INFO: Deleting 
PersistentVolumeClaim "pvc-4mmfw" May 21 17:13:09.939: INFO: Deleting PersistentVolumeClaim "pvc-78qs8" May 21 17:13:09.943: INFO: 24/28 pods finished STEP: Delete "local-pvwrgfq" and create a new PV for same local volume storage STEP: Delete "local-pvwrgfq" and create a new PV for same local volume storage STEP: Delete "local-pvww47q" and create a new PV for same local volume storage STEP: Delete "local-pvww47q" and create a new PV for same local volume storage STEP: Delete "local-pvqzhfs" and create a new PV for same local volume storage STEP: Delete "local-pvqzhfs" and create a new PV for same local volume storage May 21 17:13:12.921: INFO: Deleting pod pod-1d3a1747-dae2-4b7d-a639-6d81290c8d77 May 21 17:13:12.932: INFO: Deleting PersistentVolumeClaim "pvc-rm8rk" May 21 17:13:12.936: INFO: Deleting PersistentVolumeClaim "pvc-tsl2l" May 21 17:13:12.941: INFO: Deleting PersistentVolumeClaim "pvc-79wz7" May 21 17:13:12.945: INFO: 25/28 pods finished STEP: Delete "local-pvzkk4l" and create a new PV for same local volume storage STEP: Delete "local-pvzkk4l" and create a new PV for same local volume storage STEP: Delete "local-pvr4zmc" and create a new PV for same local volume storage STEP: Delete "local-pvr4zmc" and create a new PV for same local volume storage STEP: Delete "local-pvrjd42" and create a new PV for same local volume storage STEP: Delete "local-pvrjd42" and create a new PV for same local volume storage May 21 17:13:14.921: INFO: Deleting pod pod-cf64711f-8ab2-40b3-aed0-b76bbd79ca14 May 21 17:13:14.934: INFO: Deleting PersistentVolumeClaim "pvc-65krz" May 21 17:13:14.941: INFO: Deleting PersistentVolumeClaim "pvc-xt62q" May 21 17:13:14.946: INFO: Deleting PersistentVolumeClaim "pvc-qptgk" May 21 17:13:14.950: INFO: 26/28 pods finished May 21 17:13:14.950: INFO: Deleting pod pod-ff63fb1c-df04-4635-9220-c42aaac6b37c May 21 17:13:14.957: INFO: Deleting PersistentVolumeClaim "pvc-4fcjn" STEP: Delete "local-pv69pr7" and create a new PV for same local volume 
storage May 21 17:13:14.961: INFO: Deleting PersistentVolumeClaim "pvc-scm8f" May 21 17:13:14.966: INFO: Deleting PersistentVolumeClaim "pvc-sz2cw" May 21 17:13:14.975: INFO: 27/28 pods finished STEP: Delete "local-pv69pr7" and create a new PV for same local volume storage STEP: Delete "local-pvbxtfz" and create a new PV for same local volume storage STEP: Delete "local-pvbxtfz" and create a new PV for same local volume storage STEP: Delete "local-pv6s566" and create a new PV for same local volume storage STEP: Delete "local-pv9bpq5" and create a new PV for same local volume storage STEP: Delete "local-pv7j2b9" and create a new PV for same local volume storage STEP: Delete "local-pvd27cr" and create a new PV for same local volume storage May 21 17:13:15.921: INFO: Deleting pod pod-0c6285fd-1081-45ac-ad0e-2ec668f72180 May 21 17:13:15.930: INFO: Deleting PersistentVolumeClaim "pvc-dcdvl" May 21 17:13:15.935: INFO: Deleting PersistentVolumeClaim "pvc-tjgpx" May 21 17:13:15.939: INFO: Deleting PersistentVolumeClaim "pvc-dnhvw" May 21 17:13:15.944: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:505 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "kali-worker" STEP: Cleaning up PVC and PV May 21 17:13:15.944: INFO: pvc is nil May 21 17:13:15.944: INFO: Deleting PersistentVolume "local-pvkshwd" STEP: Cleaning up PVC and PV May 21 17:13:15.948: INFO: pvc is nil May 21 17:13:15.948: INFO: Deleting PersistentVolume "local-pvxdtt2" STEP: Cleaning up PVC and PV May 21 17:13:15.953: INFO: pvc is nil May 21 17:13:15.953: INFO: Deleting PersistentVolume "local-pv7q9lj" STEP: Cleaning up PVC and PV May 21 17:13:15.957: INFO: pvc is nil May 21 17:13:15.957: INFO: Deleting PersistentVolume "local-pvqk8mm" STEP: Cleaning up PVC and PV May 21 17:13:15.961: INFO: pvc is nil 
May 21 17:13:15.961: INFO: Deleting PersistentVolume "local-pvxg76l" STEP: Cleaning up PVC and PV May 21 17:13:15.965: INFO: pvc is nil May 21 17:13:15.965: INFO: Deleting PersistentVolume "local-pvtxtbz" STEP: Cleaning up PVC and PV May 21 17:13:15.969: INFO: pvc is nil May 21 17:13:15.969: INFO: Deleting PersistentVolume "local-pvsr7kw" STEP: Cleaning up PVC and PV May 21 17:13:15.973: INFO: pvc is nil May 21 17:13:15.973: INFO: Deleting PersistentVolume "local-pv9wh6t" STEP: Cleaning up PVC and PV May 21 17:13:15.977: INFO: pvc is nil May 21 17:13:15.977: INFO: Deleting PersistentVolume "local-pv9g8kh" STEP: Cleaning up PVC and PV May 21 17:13:15.981: INFO: pvc is nil May 21 17:13:15.981: INFO: Deleting PersistentVolume "local-pvh9zm9" STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-7465f901-38bf-404b-8b1c-9182157bf3ab" May 21 17:13:15.985: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7465f901-38bf-404b-8b1c-9182157bf3ab"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:15.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 21 17:13:16.143: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7465f901-38bf-404b-8b1c-9182157bf3ab] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:16.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-7b49e007-5524-42c5-9c4a-6b90aec92d64" May 21 17:13:16.279: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-7b49e007-5524-42c5-9c4a-6b90aec92d64"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:16.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 21 17:13:16.428: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7b49e007-5524-42c5-9c4a-6b90aec92d64] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:16.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-96437ead-a906-4638-a6f3-9699c4aa399a" May 21 17:13:16.526: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-96437ead-a906-4638-a6f3-9699c4aa399a"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:16.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 21 17:13:16.658: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-96437ead-a906-4638-a6f3-9699c4aa399a] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:16.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-0be6c878-e9ba-417a-8dd6-df60b3466212" May 21 17:13:16.766: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-0be6c878-e9ba-417a-8dd6-df60b3466212"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:16.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 21 17:13:16.908: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0be6c878-e9ba-417a-8dd6-df60b3466212] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:16.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-88ba8cda-a80c-4b24-8232-4913fa00414a" May 21 17:13:17.095: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-88ba8cda-a80c-4b24-8232-4913fa00414a"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:17.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 21 17:13:17.260: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-88ba8cda-a80c-4b24-8232-4913fa00414a] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 21 17:13:17.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-996e084a-3ccc-46bd-adf1-410647a4e65e" May 21 17:13:17.396: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-996e084a-3ccc-46bd-adf1-410647a4e65e"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:17.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:17.494: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-996e084a-3ccc-46bd-adf1-410647a4e65e] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:17.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-f5cd8efb-ac99-48b7-9728-860cfca65c2f"
May 21 17:13:17.624: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f5cd8efb-ac99-48b7-9728-860cfca65c2f"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:17.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:17.757: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f5cd8efb-ac99-48b7-9728-860cfca65c2f] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:17.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-35949dfb-041f-4ddd-88fb-18993d50d4a6"
May 21 17:13:17.886: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-35949dfb-041f-4ddd-88fb-18993d50d4a6"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:17.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:18.036: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-35949dfb-041f-4ddd-88fb-18993d50d4a6] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:18.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-58067dd1-34c5-4b1d-ab5f-8b40a212844c"
May 21 17:13:18.180: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-58067dd1-34c5-4b1d-ab5f-8b40a212844c"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:18.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:18.325: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-58067dd1-34c5-4b1d-ab5f-8b40a212844c] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:18.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker" at path "/tmp/local-volume-test-e529360f-15b1-43e4-8339-77abdc31924b"
May 21 17:13:18.465: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e529360f-15b1-43e4-8339-77abdc31924b"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:18.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:18.597: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e529360f-15b1-43e4-8339-77abdc31924b] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker-hvw2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:18.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Cleaning up 10 local volumes on node "kali-worker2"
STEP: Cleaning up PVC and PV
May 21 17:13:18.731: INFO: pvc is nil
May 21 17:13:18.731: INFO: Deleting PersistentVolume "local-pv7ns9s"
STEP: Cleaning up PVC and PV
May 21 17:13:18.738: INFO: pvc is nil
May 21 17:13:18.738: INFO: Deleting PersistentVolume "local-pvdcxqk"
STEP: Cleaning up PVC and PV
May 21 17:13:18.742: INFO: pvc is nil
May 21 17:13:18.742: INFO: Deleting PersistentVolume "local-pv8rbsv"
STEP: Cleaning up PVC and PV
May 21 17:13:18.746: INFO: pvc is nil
May 21 17:13:18.746: INFO: Deleting PersistentVolume "local-pv9b2pl"
STEP: Cleaning up PVC and PV
May 21 17:13:18.751: INFO: pvc is nil
May 21 17:13:18.751: INFO: Deleting PersistentVolume "local-pvshzkk"
STEP: Cleaning up PVC and PV
May 21 17:13:18.755: INFO: pvc is nil
May 21 17:13:18.755: INFO: Deleting PersistentVolume "local-pv6jlqx"
STEP: Cleaning up PVC and PV
May 21 17:13:18.760: INFO: pvc is nil
May 21 17:13:18.760: INFO: Deleting PersistentVolume "local-pvxdd8w"
STEP: Cleaning up PVC and PV
May 21 17:13:18.765: INFO: pvc is nil
May 21 17:13:18.765: INFO: Deleting PersistentVolume "local-pvkkfgz"
STEP: Cleaning up PVC and PV
May 21 17:13:18.769: INFO: pvc is nil
May 21 17:13:18.769: INFO: Deleting PersistentVolume "local-pvhb5vc"
STEP: Cleaning up PVC and PV
May 21 17:13:18.773: INFO: pvc is nil
May 21 17:13:18.773: INFO: Deleting PersistentVolume "local-pv5x5wr"
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-8111c802-2ef2-4f83-b326-d3355bac5a02"
May 21 17:13:18.777: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8111c802-2ef2-4f83-b326-d3355bac5a02"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:18.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:18.925: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8111c802-2ef2-4f83-b326-d3355bac5a02] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:18.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-241ce5ca-4700-4ef1-90b9-3857f968bcd1"
May 21 17:13:19.060: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-241ce5ca-4700-4ef1-90b9-3857f968bcd1"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:19.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:19.197: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-241ce5ca-4700-4ef1-90b9-3857f968bcd1] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:19.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-c48df879-00fb-45ce-8f3e-5c5178ce82a7"
May 21 17:13:19.351: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c48df879-00fb-45ce-8f3e-5c5178ce82a7"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:19.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:19.485: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c48df879-00fb-45ce-8f3e-5c5178ce82a7] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:19.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-8fd85b96-e6ca-46d1-85ec-e662bce4e038"
May 21 17:13:19.626: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8fd85b96-e6ca-46d1-85ec-e662bce4e038"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:19.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:19.750: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8fd85b96-e6ca-46d1-85ec-e662bce4e038] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:19.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-ec6fcdfe-7f43-4fc7-9dec-a8edd5687836"
May 21 17:13:19.883: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ec6fcdfe-7f43-4fc7-9dec-a8edd5687836"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:19.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:20.023: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ec6fcdfe-7f43-4fc7-9dec-a8edd5687836] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:20.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-ad5c8e2c-6e29-4720-a2fb-2e8d17ab2b56"
May 21 17:13:20.160: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ad5c8e2c-6e29-4720-a2fb-2e8d17ab2b56"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:20.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:20.302: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad5c8e2c-6e29-4720-a2fb-2e8d17ab2b56] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:20.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-6a8ce1e4-2126-499e-8c09-4113efac2584"
May 21 17:13:20.409: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6a8ce1e4-2126-499e-8c09-4113efac2584"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:20.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:20.553: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6a8ce1e4-2126-499e-8c09-4113efac2584] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:20.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-ceb22d19-01c6-46e2-83cb-4edb645b9f9b"
May 21 17:13:20.700: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ceb22d19-01c6-46e2-83cb-4edb645b9f9b"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:20.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:20.835: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ceb22d19-01c6-46e2-83cb-4edb645b9f9b] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:20.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-0338c386-6811-458e-a486-2e5b077bfa76"
May 21 17:13:20.981: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0338c386-6811-458e-a486-2e5b077bfa76"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:20.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:21.112: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0338c386-6811-458e-a486-2e5b077bfa76] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:21.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "kali-worker2" at path "/tmp/local-volume-test-54c3565b-afe1-43e7-94d1-39d1071ea075"
May 21 17:13:21.255: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-54c3565b-afe1-43e7-94d1-39d1071ea075"] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:21.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 21 17:13:21.397: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-54c3565b-afe1-43e7-94d1-39d1071ea075] Namespace:persistent-local-volumes-test-3256 PodName:hostexec-kali-worker2-st5xh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:21.397: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:13:21.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3256" for this suite.

• [SLOW TEST:59.758 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427
    should be able to process many pods and reuse local volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":17,"completed":1,"skipped":3222,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:13:21.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 21 17:13:21.585: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:13:21.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-862" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create metrics for total time taken in volume operations in P/V Controller [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260

  Only supported for providers [gce gke aws] (not skeleton)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:13:21.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 21 17:13:23.646: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1193 PodName:hostexec-kali-worker-m8bxb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:23.646: INFO: >>> kubeConfig: /root/.kube/config
May 21 17:13:23.804: INFO: exec kali-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 21 17:13:23.804: INFO: exec kali-worker: stdout: "0\n"
May 21 17:13:23.804: INFO: exec kali-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 21 17:13:23.804: INFO: exec kali-worker: exit code: 0
May 21 17:13:23.804: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:13:23.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1193" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [2.219 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Set fsGroup for local volume [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256
      should set different fsGroup for second pod if first pod is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282

      Requires at least 1 scsi fs localSSD
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:13:23.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 21 17:13:25.867: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-752 PodName:hostexec-kali-worker-24h2n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:13:25.867: INFO: >>> kubeConfig: /root/.kube/config
May 21 17:13:26.033: INFO: exec kali-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 21 17:13:26.033: INFO: exec kali-worker: stdout: "0\n"
May 21 17:13:26.033: INFO: exec kali-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 21 17:13:26.033: INFO: exec kali-worker: exit code: 0
May 21 17:13:26.033: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:13:26.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-752" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.227 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:13:26.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 21 17:13:26.081: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:13:26.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2794" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client May 21 17:13:26.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 21 17:13:26.122: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:13:26.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2531" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after 
creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:13:26.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 21 17:13:26.164: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:13:26.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3460" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:13:26.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 21 17:13:26.211: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:13:26.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8650" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642 
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:13:26.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:619
[It] all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
STEP: Create a PVC
STEP: Create 50 pods to use this PVC
STEP: Wait for all pods are running
[AfterEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:633
STEP: Clean PV local-pvltbrt
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:14:41.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8860" for this suite.
• [SLOW TEST:75.391 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:614
    all pods should be running
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":17,"completed":2,"skipped":5022,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:14:41.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 21 17:14:41.652: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:14:41.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-3473" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create prometheus metrics for volume provisioning and attach/detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 17:14:41.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 21 17:14:47.714: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8095 PodName:hostexec-kali-worker-b2w8k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 21 17:14:47.714: INFO: >>> kubeConfig: /root/.kube/config
May 21 17:14:47.859: INFO: exec kali-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 21 17:14:47.859: INFO: exec kali-worker: stdout: "0\n"
May 21 17:14:47.859: INFO: exec kali-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
May 21 17:14:47.859: INFO: exec kali-worker: exit code: 0
May 21 17:14:47.859: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 17:14:47.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8095" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [6.207 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234

      Requires at least 1 scsi fs localSSD
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 21 17:14:47.875: INFO: Running AfterSuite actions on all nodes
May 21 17:14:47.875: INFO: Running AfterSuite actions on node 1
May 21 17:14:47.875: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":2,"skipped":5482,"failed":0}

Ran 2 of 5484 Specs in 150.890 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 5482 Skipped
PASS

Ginkgo ran 1 suite in 2m32.55416115s
Test Suite Passed