I0418 18:21:46.990372 16 e2e.go:126] Starting e2e run "752ecd17-a73a-447b-b782-d659ae0a632e" on Ginkgo node 1
Apr 18 18:21:47.008: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1713464506 - will randomize all specs
Will run 28 of 7069 specs
------------------------------
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
[SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77
Apr 18 18:21:47.296: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 18 18:21:47.300: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 18 18:21:47.333: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 18 18:21:47.365: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 18 18:21:47.365: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 18 18:21:47.365: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 18 18:21:47.372: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Apr 18 18:21:47.372: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 18 18:21:47.372: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 18 18:21:47.372: INFO: e2e test version: v1.26.13
Apr 18 18:21:47.373: INFO: kube-apiserver version: v1.26.6
[SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77
Apr 18 18:21:47.373: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 18 18:21:47.379: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [0.083 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
------------------------------
[sig-storage] [Serial] Volume metrics PVC should create volume metrics with the correct FilesystemMode PVC ref
test/e2e/storage/volume_metrics.go:474
[BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:47.436
Apr 18 18:21:47.436: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/18/24 18:21:47.437
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:47.449
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:47.453
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62
Apr 18 18:21:47.457: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32
Apr 18 18:21:47.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193
STEP: Destroying namespace "pv-456" for this suite. 04/18/24 18:21:47.462
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62
  PVC test/e2e/storage/volume_metrics.go:491
    should create volume metrics with the correct FilesystemMode PVC ref test/e2e/storage/volume_metrics.go:474
Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.glob..func33.2()
    test/e2e/storage/volume_metrics.go:102 +0x6c
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
test/e2e/storage/volume_metrics.go:620
[BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:47.469
Apr 18 18:21:47.469: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/18/24 18:21:47.471
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:47.481
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:47.485
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62
Apr 18 18:21:47.490: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32
Apr 18 18:21:47.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193
STEP: Destroying namespace "pv-8500" for this suite. 04/18/24 18:21:47.495
------------------------------
S [SKIPPED] [0.031 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62
  PVController test/e2e/storage/volume_metrics.go:500
    should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc test/e2e/storage/volume_metrics.go:620
Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.glob..func33.2()
    test/e2e/storage/volume_metrics.go:102 +0x6c
------------------------------
[sig-storage] [Serial] Volume metrics Ephemeral should create metrics for total time taken in volume operations in P/V Controller
test/e2e/storage/volume_metrics.go:480
[BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:47.515
Apr 18 18:21:47.515: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/18/24 18:21:47.517
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:47.528
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:47.532
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62
Apr 18 18:21:47.537: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32
Apr 18 18:21:47.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193
STEP: Destroying namespace "pv-5568" for this suite. 04/18/24 18:21:47.542
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62
  Ephemeral test/e2e/storage/volume_metrics.go:495
    should create metrics for total time taken in volume operations in P/V Controller test/e2e/storage/volume_metrics.go:480
Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.glob..func33.2()
    test/e2e/storage/volume_metrics.go:102 +0x6c
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set fsGroup for one pod [Slow]
test/e2e/storage/persistent_volumes-local.go:270
[BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:47.55
Apr 18 18:21:47.550: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:21:47.552
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:47.563
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:47.567
[BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198
Apr 18 18:21:47.584: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker-wn2t5" in namespace "persistent-local-volumes-test-6577" to be "running"
Apr 18 18:21:47.587: INFO: Pod "hostexec-v126-worker-wn2t5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.029593ms
Apr 18 18:21:49.593: INFO: Pod "hostexec-v126-worker-wn2t5": Phase="Running", Reason="", readiness=true. Elapsed: 2.00877639s
Apr 18 18:21:49.593: INFO: Pod "hostexec-v126-worker-wn2t5" satisfied condition "running"
Apr 18 18:21:49.593: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6577 PodName:hostexec-v126-worker-wn2t5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 18 18:21:49.593: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 18 18:21:49.594: INFO: ExecWithOptions: Clientset creation
Apr 18 18:21:49.595: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-6577/pods/hostexec-v126-worker-wn2t5/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Apr 18 18:21:49.772: INFO: exec v126-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Apr 18 18:21:49.772: INFO: exec v126-worker: stdout: "0\n"
Apr 18 18:21:49.772: INFO: exec v126-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Apr 18 18:21:49.772: INFO: exec v126-worker: exit code: 0
Apr 18 18:21:49.772: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207
STEP: Cleaning up PVC and PV 04/18/24 18:21:49.772
[AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32
Apr 18 18:21:49.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193
STEP: Destroying namespace "persistent-local-volumes-test-6577" for this suite. 04/18/24 18:21:49.778
------------------------------
S [SKIPPED] [2.233 seconds]
[sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198
    Set fsGroup for local volume test/e2e/storage/persistent_volumes-local.go:263
      should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270
Requires at least 1 scsi fs localSSD
In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc00064fe60, {0xc002a61f58, 0x1, 0x22?})
    test/e2e/storage/persistent_volumes-local.go:854 +0xa9
  k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc00064fe60, {0xc002a61f58?, 0x1, 0x200?})
    test/e2e/storage/persistent_volumes-local.go:863 +0x2d
  k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2()
    test/e2e/storage/persistent_volumes-local.go:208 +0x47
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:235
[BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:49.787
Apr 18 18:21:49.787: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:21:49.789
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:49.8
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:49.804
[BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198
Apr 18 18:21:49.819: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker-t2b7t" in namespace "persistent-local-volumes-test-2847" to be "running"
Apr 18 18:21:49.822: INFO: Pod "hostexec-v126-worker-t2b7t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.89276ms
Apr 18 18:21:51.827: INFO: Pod "hostexec-v126-worker-t2b7t": Phase="Running", Reason="", readiness=true. Elapsed: 2.007898698s
Apr 18 18:21:51.827: INFO: Pod "hostexec-v126-worker-t2b7t" satisfied condition "running"
Apr 18 18:21:51.827: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2847 PodName:hostexec-v126-worker-t2b7t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 18 18:21:51.827: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 18 18:21:51.829: INFO: ExecWithOptions: Clientset creation
Apr 18 18:21:51.829: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-2847/pods/hostexec-v126-worker-t2b7t/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Apr 18 18:21:51.986: INFO: exec v126-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Apr 18 18:21:51.986: INFO: exec v126-worker: stdout: "0\n"
Apr 18 18:21:51.986: INFO: exec v126-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Apr 18 18:21:51.986: INFO: exec v126-worker: exit code: 0
Apr 18 18:21:51.986: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207
STEP: Cleaning up PVC and PV 04/18/24 18:21:51.987
[AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32
Apr 18 18:21:51.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193
STEP: Destroying namespace "persistent-local-volumes-test-2847" for this suite. 04/18/24 18:21:51.992
------------------------------
S [SKIPPED] [2.210 seconds]
[sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198
    One pod requesting one prebound PVC test/e2e/storage/persistent_volumes-local.go:212
      should be able to mount volume and read from pod1 test/e2e/storage/persistent_volumes-local.go:235
Requires at least 1 scsi fs localSSD
In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc003c145a0, {0xc002a5df58, 0x1, 0x22?})
    test/e2e/storage/persistent_volumes-local.go:854 +0xa9
  k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc003c145a0, {0xc002a5df58?, 0x1, 0x200?})
    test/e2e/storage/persistent_volumes-local.go:863 +0x2d
  k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2()
    test/e2e/storage/persistent_volumes-local.go:208 +0x47
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC
test/e2e/storage/volume_metrics.go:598
[BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:52.008
Apr 18 18:21:52.008: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/18/24 18:21:52.009
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:52.018
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:52.022
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62
Apr 18 18:21:52.026: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32
Apr 18 18:21:52.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193
STEP: Destroying namespace "pv-2940" for this suite. 04/18/24 18:21:52.031
------------------------------
S [SKIPPED] [0.028 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62
  PVController test/e2e/storage/volume_metrics.go:500
    should create none metrics for pvc controller before creating any PV or PVC test/e2e/storage/volume_metrics.go:598
Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.glob..func33.2()
    test/e2e/storage/volume_metrics.go:102 +0x6c
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:252
[BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:52.039
Apr 18 18:21:52.039: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:21:52.04
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:52.051
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:52.055
[BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198
Apr 18 18:21:52.071: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-dx9d6" in namespace "persistent-local-volumes-test-9101" to be "running"
Apr 18 18:21:52.074: INFO: Pod "hostexec-v126-worker2-dx9d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.14857ms
Apr 18 18:21:54.079: INFO: Pod "hostexec-v126-worker2-dx9d6": Phase="Running", Reason="", readiness=true. Elapsed: 2.008299923s
Apr 18 18:21:54.079: INFO: Pod "hostexec-v126-worker2-dx9d6" satisfied condition "running"
Apr 18 18:21:54.080: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9101 PodName:hostexec-v126-worker2-dx9d6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 18 18:21:54.080: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 18 18:21:54.081: INFO: ExecWithOptions: Clientset creation
Apr 18 18:21:54.081: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-9101/pods/hostexec-v126-worker2-dx9d6/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Apr 18 18:21:54.251: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Apr 18 18:21:54.251: INFO: exec v126-worker2: stdout: "0\n"
Apr 18 18:21:54.251: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Apr 18 18:21:54.251: INFO: exec v126-worker2: exit code: 0
Apr 18 18:21:54.251: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207
STEP: Cleaning up PVC and PV 04/18/24 18:21:54.252
[AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32
Apr 18 18:21:54.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193
STEP: Destroying namespace "persistent-local-volumes-test-9101" for this suite. 04/18/24 18:21:54.257
------------------------------
S [SKIPPED] [2.224 seconds]
[sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198
    Two pods mounting a local volume at the same time test/e2e/storage/persistent_volumes-local.go:251
      should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:252
Requires at least 1 scsi fs localSSD
In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc0056f10e0, {0xc00220df58, 0x1, 0x22?})
    test/e2e/storage/persistent_volumes-local.go:854 +0xa9
  k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc0056f10e0, {0xc00220df58?, 0x1, 0x200?})
    test/e2e/storage/persistent_volumes-local.go:863 +0x2d
  k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2()
    test/e2e/storage/persistent_volumes-local.go:208 +0x47
------------------------------
[sig-storage] [Serial] Volume metrics Ephemeral should create volume metrics with the correct BlockMode PVC ref
test/e2e/storage/volume_metrics.go:477
[BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:54.309
Apr 18 18:21:54.309: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/18/24 18:21:54.31
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:54.321
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:54.326
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62
Apr 18 18:21:54.330: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32
Apr 18 18:21:54.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193
STEP: Destroying namespace "pv-5299" for this suite. 04/18/24 18:21:54.335
------------------------------
S [SKIPPED] [0.031 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62
  Ephemeral test/e2e/storage/volume_metrics.go:495
    should create volume metrics with the correct BlockMode PVC ref test/e2e/storage/volume_metrics.go:477
Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.glob..func33.2()
    test/e2e/storage/volume_metrics.go:102 +0x6c
------------------------------
[sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only
test/e2e/storage/volume_metrics.go:611
[BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:54.357
Apr 18 18:21:54.357: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/18/24 18:21:54.359
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:54.37
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:54.374
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62
Apr 18 18:21:54.379: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32
Apr 18 18:21:54.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193
STEP: Destroying namespace "pv-3274" for this suite. 04/18/24 18:21:54.384
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62
  PVController test/e2e/storage/volume_metrics.go:500
    should create unbound pvc count metrics for pvc controller after creating pvc only test/e2e/storage/volume_metrics.go:611
Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.glob..func33.2()
    test/e2e/storage/volume_metrics.go:102 +0x6c
------------------------------
[sig-storage] [Serial] Volume metrics Ephemeral should create volume metrics in Volume Manager
test/e2e/storage/volume_metrics.go:483
[BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:54.409
Apr 18 18:21:54.409: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/18/24 18:21:54.41
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:54.421
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:54.425
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62
Apr 18 18:21:54.429: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32
Apr 18 18:21:54.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193
STEP: Destroying namespace "pv-5065" for this suite. 04/18/24 18:21:54.435
------------------------------
S [SKIPPED] [0.031 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62
  Ephemeral test/e2e/storage/volume_metrics.go:495
    should create volume metrics in Volume Manager test/e2e/storage/volume_metrics.go:483
Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70
There were additional failures detected after the initial failure:
[PANICKED] Test Panicked
In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260
runtime error: invalid memory address or nil pointer dereference
Full Stack Trace
  k8s.io/kubernetes/test/e2e/storage.glob..func33.2()
    test/e2e/storage/volume_metrics.go:102 +0x6c
------------------------------
[sig-storage] [Serial] Volume metrics Ephemeral should create volume metrics with the correct FilesystemMode PVC ref
test/e2e/storage/volume_metrics.go:474
[BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:21:54.451
Apr 18 18:21:54.452: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/18/24 18:21:54.453
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:54.464
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:54.468
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62
Apr 18 18:21:54.472: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32
Apr 18 18:21:54.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-4985" for this suite. 04/18/24 18:21:54.477 ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create volume metrics with the correct FilesystemMode PVC ref test/e2e/storage/volume_metrics.go:474 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:21:54.451 Apr 18 18:21:54.452: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:21:54.453 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:54.464 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:54.468 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:21:54.472: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:21:54.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-4985" for this suite. 
04/18/24 18:21:54.477 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running test/e2e/storage/persistent_volumes-local.go:656 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:21:54.485 Apr 18 18:21:54.486: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:21:54.487 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:54.497 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:54.501 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] Pods sharing a single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:633 [It] all pods should be running test/e2e/storage/persistent_volumes-local.go:656 STEP: Create a PVC 04/18/24 18:21:54.515 STEP: Create 2 pods to use this PVC 04/18/24 18:21:54.522 STEP: Wait for all pods are running 04/18/24 18:21:54.533 [AfterEach] Pods sharing a single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:647 STEP: Clean PV local-pvzgdls 04/18/24 18:22:00.541 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-390" for this suite. 
04/18/24 18:22:00.553 ------------------------------ • [SLOW TEST] [6.073 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:628 all pods should be running test/e2e/storage/persistent_volumes-local.go:656 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:21:54.485 Apr 18 18:21:54.486: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:21:54.487 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:21:54.497 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:21:54.501 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] Pods sharing a single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:633 [It] all pods should be running test/e2e/storage/persistent_volumes-local.go:656 STEP: Create a PVC 04/18/24 18:21:54.515 STEP: Create 2 pods to use this PVC 04/18/24 18:21:54.522 STEP: Wait for all pods are running 04/18/24 18:21:54.533 [AfterEach] Pods sharing a single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:647 STEP: Clean PV local-pvzgdls 04/18/24 18:22:00.541 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-390" for this suite. 
04/18/24 18:22:00.553 << End Captured GinkgoWriter Output ------------------------------ S ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create metrics for total time taken in volume operations in P/V Controller test/e2e/storage/volume_metrics.go:480 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.559 Apr 18 18:22:00.559: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.561 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.572 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.576 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.580: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9904" for this suite. 04/18/24 18:22:00.585 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create metrics for total time taken in volume operations in P/V Controller test/e2e/storage/volume_metrics.go:480 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.559 Apr 18 18:22:00.559: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.561 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.572 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.576 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.580: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9904" for this suite. 
04/18/24 18:22:00.585 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create metrics for total number of volumes in A/D Controller test/e2e/storage/volume_metrics.go:486 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.606 Apr 18 18:22:00.607: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.608 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.619 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.623 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.627: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-6430" for this suite. 
04/18/24 18:22:00.632 ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create metrics for total number of volumes in A/D Controller test/e2e/storage/volume_metrics.go:486 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.606 Apr 18 18:22:00.607: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.608 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.619 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.623 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.627: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-6430" for this suite. 04/18/24 18:22:00.632 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.669 Apr 18 18:22:00.669: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.671 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.68 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.684 [BeforeEach] 
[sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.689: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-291" for this suite. 04/18/24 18:22:00.694 ------------------------------ S [SKIPPED] [0.030 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.669 Apr 18 18:22:00.669: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.671 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.68 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.684 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.689: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-291" for this suite. 
04/18/24 18:22:00.694 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.705 Apr 18 18:22:00.705: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.706 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.717 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.721 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.725: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-4315" for this suite. 
04/18/24 18:22:00.73 ------------------------------ S [SKIPPED] [0.030 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.705 Apr 18 18:22:00.705: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.706 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.717 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.721 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.725: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-4315" for this suite. 04/18/24 18:22:00.73 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create volume metrics with the correct BlockMode PVC ref test/e2e/storage/volume_metrics.go:477 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.769 Apr 18 18:22:00.769: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.772 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.783 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.787 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.791: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5762" for this suite. 04/18/24 18:22:00.796 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create volume metrics with the correct BlockMode PVC ref test/e2e/storage/volume_metrics.go:477 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.769 Apr 18 18:22:00.769: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.772 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.783 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.787 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.791: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5762" for this suite. 
04/18/24 18:22:00.796 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Feature:StorageProvider] [Serial] attach on previously attached volumes should work test/e2e/storage/pd.go:461 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.802 Apr 18 18:22:00.802: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pod-disks 04/18/24 18:22:00.804 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.815 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.819 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/storage/pd.go:76 [It] [Serial] attach on previously attached volumes should work test/e2e/storage/pd.go:461 Apr 18 18:22:00.832: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] tear down framework | framework.go:193 STEP: Destroying namespace "pod-disks-6463" for this suite. 
04/18/24 18:22:00.837 ------------------------------ S [SKIPPED] [0.040 seconds] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/storage/utils/framework.go:23 [It] [Serial] attach on previously attached volumes should work test/e2e/storage/pd.go:461 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.802 Apr 18 18:22:00.802: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pod-disks 04/18/24 18:22:00.804 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.815 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.819 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/storage/pd.go:76 [It] [Serial] attach on previously attached volumes should work test/e2e/storage/pd.go:461 Apr 18 18:22:00.832: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] tear down framework | framework.go:193 STEP: Destroying namespace "pod-disks-6463" for this suite. 04/18/24 18:22:00.837 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [It] at: test/e2e/storage/pd.go:462 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only test/e2e/storage/volume_metrics.go:602 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.849 Apr 18 18:22:00.849: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.851 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.861 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.865 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.869: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: 
Destroying namespace "pv-1370" for this suite. 04/18/24 18:22:00.875 ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController test/e2e/storage/volume_metrics.go:500 should create unbound pv count metrics for pvc controller after creating pv only test/e2e/storage/volume_metrics.go:602 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.849 Apr 18 18:22:00.849: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:00.851 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.861 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.865 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:00.869: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:00.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-1370" for this suite. 04/18/24 18:22:00.875 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.889 Apr 18 18:22:00.889: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:22:00.891 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.901 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.905 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] 
test/e2e/storage/persistent_volumes-local.go:198 Apr 18 18:22:00.921: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-mnwlb" in namespace "persistent-local-volumes-test-4693" to be "running" Apr 18 18:22:00.924: INFO: Pod "hostexec-v126-worker2-mnwlb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.210904ms Apr 18 18:22:02.929: INFO: Pod "hostexec-v126-worker2-mnwlb": Phase="Running", Reason="", readiness=true. Elapsed: 2.007720969s Apr 18 18:22:02.929: INFO: Pod "hostexec-v126-worker2-mnwlb" satisfied condition "running" Apr 18 18:22:02.929: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4693 PodName:hostexec-v126-worker2-mnwlb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:02.929: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:02.930: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:02.931: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-4693/pods/hostexec-v126-worker2-mnwlb/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 18 18:22:03.093: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 18 18:22:03.093: INFO: exec v126-worker2: stdout: "0\n" Apr 18 18:22:03.094: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 18 18:22:03.094: INFO: exec v126-worker2: exit code: 0 Apr 18 18:22:03.094: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/18/24 18:22:03.094 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:22:03.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-4693" for this suite. 
04/18/24 18:22:03.099 ------------------------------ S [SKIPPED] [2.216 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 Two pods mounting a local volume one after the other test/e2e/storage/persistent_volumes-local.go:257 should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:00.889 Apr 18 18:22:00.889: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:22:00.891 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:00.901 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:00.905 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 18 18:22:00.921: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-mnwlb" in namespace "persistent-local-volumes-test-4693" to be "running" Apr 18 18:22:00.924: INFO: Pod "hostexec-v126-worker2-mnwlb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.210904ms Apr 18 18:22:02.929: INFO: Pod "hostexec-v126-worker2-mnwlb": Phase="Running", Reason="", readiness=true. Elapsed: 2.007720969s Apr 18 18:22:02.929: INFO: Pod "hostexec-v126-worker2-mnwlb" satisfied condition "running" Apr 18 18:22:02.929: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4693 PodName:hostexec-v126-worker2-mnwlb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:02.929: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:02.930: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:02.931: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-4693/pods/hostexec-v126-worker2-mnwlb/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 18 18:22:03.093: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 18 18:22:03.093: INFO: exec v126-worker2: stdout: "0\n" Apr 18 18:22:03.094: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 18 18:22:03.094: INFO: exec v126-worker2: exit code: 0 Apr 18 18:22:03.094: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/18/24 18:22:03.094 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:22:03.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] 
PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-4693" for this suite. 04/18/24 18:22:03.099 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc001c08360, {0xc005adbf58, 0x1, 0x22?}) test/e2e/storage/persistent_volumes-local.go:854 +0xa9 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc001c08360, {0xc005adbf58?, 0x1, 0x200?}) test/e2e/storage/persistent_volumes-local.go:863 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2() test/e2e/storage/persistent_volumes-local.go:208 +0x47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv test/e2e/storage/volume_metrics.go:630 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:03.116 Apr 18 18:22:03.116: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:03.117 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:03.129 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:03.133 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:03.137: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:03.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5145" for this suite. 
04/18/24 18:22:03.142 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController test/e2e/storage/volume_metrics.go:500 should create total pv count metrics for with plugin and volume mode labels after creating pv test/e2e/storage/volume_metrics.go:630 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:03.116 Apr 18 18:22:03.116: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:03.117 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:03.129 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:03.133 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:03.137: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:03.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5145" for this suite. 04/18/24 18:22:03.142 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 test/e2e/storage/persistent_volumes-local.go:241 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:03.156 Apr 18 18:22:03.156: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:22:03.157 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:03.167 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:03.171 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] 
test/e2e/storage/persistent_volumes-local.go:198 Apr 18 18:22:03.187: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker-5d6v8" in namespace "persistent-local-volumes-test-2306" to be "running" Apr 18 18:22:03.190: INFO: Pod "hostexec-v126-worker-5d6v8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.898408ms Apr 18 18:22:05.195: INFO: Pod "hostexec-v126-worker-5d6v8": Phase="Running", Reason="", readiness=true. Elapsed: 2.007093994s Apr 18 18:22:05.195: INFO: Pod "hostexec-v126-worker-5d6v8" satisfied condition "running" Apr 18 18:22:05.195: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2306 PodName:hostexec-v126-worker-5d6v8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:05.195: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:05.197: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:05.197: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-2306/pods/hostexec-v126-worker-5d6v8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 18 18:22:05.370: INFO: exec v126-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 18 18:22:05.370: INFO: exec v126-worker: stdout: "0\n" Apr 18 18:22:05.370: INFO: exec v126-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 18 18:22:05.370: INFO: exec v126-worker: exit code: 0 Apr 18 18:22:05.370: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/18/24 18:22:05.37 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:22:05.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-2306" for this suite. 
04/18/24 18:22:05.376 ------------------------------ S [SKIPPED] [2.226 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 One pod requesting one prebound PVC test/e2e/storage/persistent_volumes-local.go:212 should be able to mount volume and write from pod1 test/e2e/storage/persistent_volumes-local.go:241 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:03.156 Apr 18 18:22:03.156: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:22:03.157 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:03.167 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:03.171 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 18 18:22:03.187: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker-5d6v8" in namespace "persistent-local-volumes-test-2306" to be "running" Apr 18 18:22:03.190: INFO: Pod "hostexec-v126-worker-5d6v8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.898408ms Apr 18 18:22:05.195: INFO: Pod "hostexec-v126-worker-5d6v8": Phase="Running", Reason="", readiness=true. Elapsed: 2.007093994s Apr 18 18:22:05.195: INFO: Pod "hostexec-v126-worker-5d6v8" satisfied condition "running" Apr 18 18:22:05.195: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2306 PodName:hostexec-v126-worker-5d6v8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:05.195: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:05.197: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:05.197: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-2306/pods/hostexec-v126-worker-5d6v8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 18 18:22:05.370: INFO: exec v126-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 18 18:22:05.370: INFO: exec v126-worker: stdout: "0\n" Apr 18 18:22:05.370: INFO: exec v126-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 18 18:22:05.370: INFO: exec v126-worker: exit code: 0 Apr 18 18:22:05.370: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/18/24 18:22:05.37 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:22:05.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local 
test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-2306" for this suite. 04/18/24 18:22:05.376 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc000a086c0, {0xc000643f58, 0x1, 0x22?}) test/e2e/storage/persistent_volumes-local.go:854 +0xa9 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc000a086c0, {0xc000643f58?, 0x1, 0x200?}) test/e2e/storage/persistent_volumes-local.go:863 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2() test/e2e/storage/persistent_volumes-local.go:208 +0x47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow] test/e2e/storage/persistent_volumes-local.go:277 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:05.387 Apr 18 18:22:05.388: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:22:05.389 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:05.401 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:05.405 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 18 18:22:05.422: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker-gm8xg" in namespace "persistent-local-volumes-test-567" to be "running" Apr 18 18:22:05.425: INFO: Pod "hostexec-v126-worker-gm8xg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.844135ms Apr 18 18:22:07.429: INFO: Pod "hostexec-v126-worker-gm8xg": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007638829s Apr 18 18:22:07.429: INFO: Pod "hostexec-v126-worker-gm8xg" satisfied condition "running" Apr 18 18:22:07.429: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-567 PodName:hostexec-v126-worker-gm8xg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:07.430: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:07.431: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:07.431: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-567/pods/hostexec-v126-worker-gm8xg/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 18 18:22:07.583: INFO: exec v126-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 18 18:22:07.583: INFO: exec v126-worker: stdout: "0\n" Apr 18 18:22:07.583: INFO: exec v126-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 18 18:22:07.583: INFO: exec v126-worker: exit code: 0 Apr 18 18:22:07.583: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/18/24 18:22:07.584 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:22:07.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-567" for this suite. 
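
Both gce-localssd-scsi-fs specs above gate on the same precondition probe: the hostexec pod runs "ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l" through nsenter into the node's mount namespace, and a count of 0 produces the "Requires at least 1 scsi fs localSSD" skip. A minimal standalone sketch of that probe, run directly rather than through a hostexec pod; the function name and skip handling here are illustrative, not the e2e framework's helpers:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// countLocalSSDs mirrors the probe captured in the log: list the GCE
// local-SSD by-uuid directory and count the entries. If the directory is
// missing, ls fails but wc still prints 0 and the pipeline exits 0, which is
// exactly what the captured output above shows.
func countLocalSSDs() (int, error) {
	out, err := exec.Command("sh", "-c",
		"ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l").Output()
	if err != nil {
		return 0, err
	}
	n := 0
	_, err = fmt.Sscanf(strings.TrimSpace(string(out)), "%d", &n)
	return n, err
}

func main() {
	n, err := countLocalSSDs()
	if err != nil || n < 1 {
		fmt.Println("skipping: requires at least 1 scsi fs localSSD")
		return
	}
	fmt.Printf("found %d local SSD filesystem(s)\n", n)
}
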
04/18/24 18:22:07.588 ------------------------------ S [SKIPPED] [2.207 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 Set fsGroup for local volume test/e2e/storage/persistent_volumes-local.go:263 should set same fsGroup for two pods simultaneously [Slow] test/e2e/storage/persistent_volumes-local.go:277 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:05.387 Apr 18 18:22:05.388: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:22:05.389 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:05.401 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:05.405 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 18 18:22:05.422: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker-gm8xg" in namespace "persistent-local-volumes-test-567" to be "running" Apr 18 18:22:05.425: INFO: Pod "hostexec-v126-worker-gm8xg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.844135ms Apr 18 18:22:07.429: INFO: Pod "hostexec-v126-worker-gm8xg": Phase="Running", Reason="", readiness=true. Elapsed: 2.007638829s Apr 18 18:22:07.429: INFO: Pod "hostexec-v126-worker-gm8xg" satisfied condition "running" Apr 18 18:22:07.429: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-567 PodName:hostexec-v126-worker-gm8xg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:07.430: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:07.431: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:07.431: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-567/pods/hostexec-v126-worker-gm8xg/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 18 18:22:07.583: INFO: exec v126-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 18 18:22:07.583: INFO: exec v126-worker: stdout: "0\n" Apr 18 18:22:07.583: INFO: exec v126-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 18 18:22:07.583: INFO: exec v126-worker: exit code: 0 Apr 18 18:22:07.583: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/18/24 18:22:07.584 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:22:07.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local 
test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-567" for this suite. 04/18/24 18:22:07.588 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc0047434d0, {0xc00063ff58, 0x1, 0x22?}) test/e2e/storage/persistent_volumes-local.go:854 +0xa9 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc0047434d0, {0xc00063ff58?, 0x1, 0x200?}) test/e2e/storage/persistent_volumes-local.go:863 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2() test/e2e/storage/persistent_volumes-local.go:208 +0x47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create metrics for total number of volumes in A/D Controller test/e2e/storage/volume_metrics.go:486 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:07.605 Apr 18 18:22:07.605: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:07.606 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:07.616 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:07.619 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:07.624: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:07.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-2607" for this suite. 
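
The [PANICKED] blocks above share a pattern: the [BeforeEach] skips before any local volumes or claims are provisioned, and the cleanup invoked from the [AfterEach] (cleanupLocalPVCsPVs at persistent_volumes-local.go:854) then appears to dereference state that was never initialized, giving the nil pointer dereference. A guarded cleanup, sketched with hypothetical types rather than the actual test code, avoids that failure mode:

package main

import "fmt"

// pvcPVPair is a hypothetical stand-in for the claim/volume pair the cleanup
// walks; in the skipped specs above it was never populated before AfterEach ran.
type pvcPVPair struct {
	pvcName string
	pvName  string
}

func cleanupPairs(pairs []*pvcPVPair) {
	for _, p := range pairs {
		if p == nil {
			// Nothing was provisioned before the skip; there is nothing to delete.
			continue
		}
		fmt.Printf("deleting PVC %q and PV %q\n", p.pvcName, p.pvName)
	}
}

func main() {
	// One slot allocated by setup, never filled in because the spec skipped early.
	cleanupPairs([]*pvcPVPair{nil})
}
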
04/18/24 18:22:07.628 ------------------------------ S [SKIPPED] [0.028 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create metrics for total number of volumes in A/D Controller test/e2e/storage/volume_metrics.go:486 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:07.605 Apr 18 18:22:07.605: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:22:07.606 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:07.616 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:07.619 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:22:07.624: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:22:07.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-2607" for this suite. 04/18/24 18:22:07.628 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes test/e2e/storage/persistent_volumes-local.go:534 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:22:07.645 Apr 18 18:22:07.645: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:22:07.647 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:07.657 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:07.661 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] Stress with local volumes [Serial] test/e2e/storage/persistent_volumes-local.go:458 STEP: Setting up 10 local volumes on node "v126-worker" 04/18/24 18:22:07.674 STEP: Creating tmpfs mount point on node "v126-worker" 
at path "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2" 04/18/24 18:22:07.674 Apr 18 18:22:07.682: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker-6lhv8" in namespace "persistent-local-volumes-test-38" to be "running" Apr 18 18:22:07.685: INFO: Pod "hostexec-v126-worker-6lhv8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071675ms Apr 18 18:22:09.690: INFO: Pod "hostexec-v126-worker-6lhv8": Phase="Running", Reason="", readiness=true. Elapsed: 2.007887571s Apr 18 18:22:09.690: INFO: Pod "hostexec-v126-worker-6lhv8" satisfied condition "running" Apr 18 18:22:09.690: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2" "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:09.690: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:09.692: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:09.692: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2%22+%22%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6" 04/18/24 18:22:09.848 Apr 18 18:22:09.848: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6" "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:09.848: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:09.849: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:09.849: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6%22+%22%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23" 04/18/24 18:22:09.998 Apr 18 18:22:09.998: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23" "/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:09.998: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.000: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.000: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23%22+%22%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635" 04/18/24 18:22:10.159 Apr 18 18:22:10.160: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635" "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.160: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.161: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.161: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635%22+%22%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa" 04/18/24 18:22:10.308 Apr 18 18:22:10.309: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa" "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.309: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.310: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.310: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa%22+%22%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60" 04/18/24 18:22:10.464 Apr 18 18:22:10.464: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60" "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.464: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.465: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.465: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60%22+%22%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3" 04/18/24 18:22:10.625 Apr 18 18:22:10.625: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3" "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.625: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.626: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.626: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3%22+%22%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a" 04/18/24 
18:22:10.779 Apr 18 18:22:10.779: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a" "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.779: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.781: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.781: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a%22+%22%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11" 04/18/24 18:22:10.868 Apr 18 18:22:10.868: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11" "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.868: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.870: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.870: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11%22+%22%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e" 04/18/24 18:22:11.001 Apr 18 18:22:11.001: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e" "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:11.001: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:11.002: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:11.003: INFO: 
ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e%22+%22%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Setting up 10 local volumes on node "v126-worker2" 04/18/24 18:22:11.158 STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19" 04/18/24 18:22:11.159 Apr 18 18:22:11.166: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-qpb6j" in namespace "persistent-local-volumes-test-38" to be "running" Apr 18 18:22:11.169: INFO: Pod "hostexec-v126-worker2-qpb6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.991798ms Apr 18 18:22:13.175: INFO: Pod "hostexec-v126-worker2-qpb6j": Phase="Running", Reason="", readiness=true. Elapsed: 2.008269065s Apr 18 18:22:13.175: INFO: Pod "hostexec-v126-worker2-qpb6j" satisfied condition "running" Apr 18 18:22:13.175: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19" "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.175: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.176: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.176: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19%22+%22%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662" 04/18/24 18:22:13.344 Apr 18 18:22:13.344: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662" "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.344: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.346: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.346: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662%22+%22%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f" 04/18/24 18:22:13.509 Apr 18 18:22:13.510: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f" "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.510: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.511: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.511: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f%22+%22%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2" 04/18/24 18:22:13.658 Apr 18 18:22:13.658: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2" "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.658: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.659: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.659: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2%22+%22%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00" 
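
Each "Creating tmpfs mount point" step above runs the same shell fragment through the node's hostexec pod (nsenter into /rootfs/proc/1/ns/mnt): create the per-volume directory under /tmp, then mount a small tmpfs (size=10m) on it, reusing the quoted directory path as the tmpfs device name. A short sketch that just composes that command for one hypothetical path, to make the repeated exec lines easier to read:

package main

import "fmt"

// tmpfsSetupCommand reproduces the shell fragment the suite executes for every
// local volume in this stress spec. The directory is hypothetical; in the log
// each one is a fresh /tmp/local-volume-test-<uuid> path.
func tmpfsSetupCommand(dir string) string {
	return fmt.Sprintf("mkdir -p %q && mount -t tmpfs -o size=10m tmpfs-%q %q",
		dir, dir, dir)
}

func main() {
	fmt.Println(tmpfsSetupCommand("/tmp/local-volume-test-example"))
}
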
04/18/24 18:22:13.812 Apr 18 18:22:13.812: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00" "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.812: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.813: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.813: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00%22+%22%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf" 04/18/24 18:22:13.937 Apr 18 18:22:13.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf" "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.937: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.938: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.939: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf%22+%22%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880" 04/18/24 18:22:14.086 Apr 18 18:22:14.086: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880" "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:14.086: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:14.088: INFO: ExecWithOptions: Clientset creation Apr 18 
18:22:14.088: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880%22+%22%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3" 04/18/24 18:22:14.228 Apr 18 18:22:14.228: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3" "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:14.228: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:14.229: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:14.229: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3%22+%22%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af" 04/18/24 18:22:14.379 Apr 18 18:22:14.380: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af" "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:14.380: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:14.381: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:14.381: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af%22+%22%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path 
"/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2" 04/18/24 18:22:14.522 Apr 18 18:22:14.522: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2" "/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:14.522: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:14.524: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:14.524: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2%22+%22%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Create 20 PVs 04/18/24 18:22:14.673 STEP: Start a goroutine to recycle unbound PVs 04/18/24 18:22:14.761 [It] should be able to process many pods and reuse local volumes test/e2e/storage/persistent_volumes-local.go:534 STEP: Creating 7 pods periodically 04/18/24 18:22:14.761 STEP: Waiting for all pods to complete successfully 04/18/24 18:22:14.762 Apr 18 18:22:21.901: INFO: Deleting pod pod-1ccecc6d-8955-40e8-a3cc-15039ff79ae8 Apr 18 18:22:21.909: INFO: Deleting PersistentVolumeClaim "pvc-wxq2q" Apr 18 18:22:21.914: INFO: Deleting PersistentVolumeClaim "pvc-rkx6p" Apr 18 18:22:21.919: INFO: Deleting PersistentVolumeClaim "pvc-p6tfn" Apr 18 18:22:21.924: INFO: 1/28 pods finished Apr 18 18:22:21.924: INFO: Deleting pod pod-2535a53f-97c9-497c-83da-c84fb3edeca5 Apr 18 18:22:21.933: INFO: Deleting PersistentVolumeClaim "pvc-kwjw9" STEP: Delete "local-pvph4pd" and create a new PV for same local volume storage 04/18/24 18:22:21.936 Apr 18 18:22:21.938: INFO: Deleting PersistentVolumeClaim "pvc-w9h7p" Apr 18 18:22:21.946: INFO: Deleting PersistentVolumeClaim "pvc-n2zv9" Apr 18 18:22:21.961: INFO: 2/28 pods finished STEP: Delete "local-pvnzz4x" and create a new PV for same local volume storage 04/18/24 18:22:21.966 STEP: Delete "local-pvf7ktd" and create a new PV for same local volume storage 04/18/24 18:22:21.977 STEP: Delete "local-pv5xhdz" and create a new PV for same local volume storage 04/18/24 18:22:21.994 STEP: Delete "local-pvl97zg" and create a new PV for same local volume storage 04/18/24 18:22:22.007 STEP: Delete "local-pvhpk9w" and create a new PV for same local volume storage 04/18/24 18:22:22.021 Apr 18 18:22:25.900: INFO: Deleting pod pod-10961d13-5c37-4cc3-bdb2-38cb1458d881 Apr 18 18:22:25.909: INFO: Deleting PersistentVolumeClaim "pvc-2vck4" Apr 18 18:22:25.914: INFO: Deleting PersistentVolumeClaim "pvc-rsh4d" Apr 18 18:22:25.920: INFO: Deleting PersistentVolumeClaim "pvc-x2wng" Apr 18 18:22:25.925: INFO: 3/28 pods finished Apr 18 18:22:25.925: INFO: Deleting pod pod-31fd79fc-c1af-4daa-8549-0c6095e8ccdf Apr 18 18:22:25.934: INFO: Deleting PersistentVolumeClaim "pvc-6w2t8" STEP: Delete "local-pv2t4vz" and create a new PV for same local 
volume storage 04/18/24 18:22:25.936 Apr 18 18:22:25.939: INFO: Deleting PersistentVolumeClaim "pvc-fpdz7" Apr 18 18:22:25.943: INFO: Deleting PersistentVolumeClaim "pvc-6pb7z" Apr 18 18:22:25.948: INFO: 4/28 pods finished STEP: Delete "local-pvwrsd5" and create a new PV for same local volume storage 04/18/24 18:22:25.951 STEP: Delete "local-pv7zmp6" and create a new PV for same local volume storage 04/18/24 18:22:25.963 STEP: Delete "local-pvm4vws" and create a new PV for same local volume storage 04/18/24 18:22:25.977 STEP: Delete "local-pvdgrhs" and create a new PV for same local volume storage 04/18/24 18:22:25.991 STEP: Delete "local-pvmrzbm" and create a new PV for same local volume storage 04/18/24 18:22:26.006 Apr 18 18:22:28.897: INFO: Deleting pod pod-3dd03287-287d-4089-9a34-270023e3cd49 Apr 18 18:22:28.906: INFO: Deleting PersistentVolumeClaim "pvc-md59r" Apr 18 18:22:28.911: INFO: Deleting PersistentVolumeClaim "pvc-wn9dq" Apr 18 18:22:28.917: INFO: Deleting PersistentVolumeClaim "pvc-qxfcr" Apr 18 18:22:28.925: INFO: 5/28 pods finished STEP: Delete "local-pvbvkk9" and create a new PV for same local volume storage 04/18/24 18:22:28.938 STEP: Delete "local-pvbzf44" and create a new PV for same local volume storage 04/18/24 18:22:28.952 STEP: Delete "local-pvvl8p5" and create a new PV for same local volume storage 04/18/24 18:22:28.968 Apr 18 18:22:30.901: INFO: Deleting pod pod-243195a3-408b-4e61-be40-41353209c9a6 Apr 18 18:22:30.909: INFO: Deleting PersistentVolumeClaim "pvc-4pfcl" Apr 18 18:22:30.914: INFO: Deleting PersistentVolumeClaim "pvc-4c4b4" Apr 18 18:22:30.919: INFO: Deleting PersistentVolumeClaim "pvc-6fhtf" Apr 18 18:22:30.923: INFO: 6/28 pods finished STEP: Delete "local-pv98xdd" and create a new PV for same local volume storage 04/18/24 18:22:30.935 STEP: Delete "local-pvcqrjl" and create a new PV for same local volume storage 04/18/24 18:22:30.949 STEP: Delete "local-pvcqrjl" and create a new PV for same local volume storage 04/18/24 18:22:30.962 STEP: Delete "local-pvz4k5z" and create a new PV for same local volume storage 04/18/24 18:22:30.965 Apr 18 18:22:35.895: INFO: Deleting pod pod-04b4bced-9f6d-47ba-bc71-9dfa936f71b5 Apr 18 18:22:35.903: INFO: Deleting PersistentVolumeClaim "pvc-kzxg6" Apr 18 18:22:35.907: INFO: Deleting PersistentVolumeClaim "pvc-q8lgs" Apr 18 18:22:35.911: INFO: Deleting PersistentVolumeClaim "pvc-9k5rq" Apr 18 18:22:35.916: INFO: 7/28 pods finished STEP: Delete "local-pvtwr9f" and create a new PV for same local volume storage 04/18/24 18:22:35.93 STEP: Delete "local-pvb8pmh" and create a new PV for same local volume storage 04/18/24 18:22:35.944 STEP: Delete "local-pvrbqxs" and create a new PV for same local volume storage 04/18/24 18:22:35.958 Apr 18 18:22:37.895: INFO: Deleting pod pod-266fb444-3fed-40b3-93b6-1b6f7775b3f5 Apr 18 18:22:37.905: INFO: Deleting PersistentVolumeClaim "pvc-kvlwc" Apr 18 18:22:37.910: INFO: Deleting PersistentVolumeClaim "pvc-7k7mp" Apr 18 18:22:37.915: INFO: Deleting PersistentVolumeClaim "pvc-cr4g8" Apr 18 18:22:37.920: INFO: 8/28 pods finished STEP: Delete "local-pvbbr8z" and create a new PV for same local volume storage 04/18/24 18:22:37.934 STEP: Delete "local-pv69pm7" and create a new PV for same local volume storage 04/18/24 18:22:37.949 STEP: Delete "local-pvm4p28" and create a new PV for same local volume storage 04/18/24 18:22:37.963 Apr 18 18:22:39.899: INFO: Deleting pod pod-1e92c2a2-f5d9-4c5c-94cf-f0094f040bc1 Apr 18 18:22:39.907: INFO: Deleting PersistentVolumeClaim "pvc-6s9tg" Apr 18 
18:22:39.912: INFO: Deleting PersistentVolumeClaim "pvc-f2wwz" Apr 18 18:22:39.917: INFO: Deleting PersistentVolumeClaim "pvc-2jjt5" Apr 18 18:22:39.922: INFO: 9/28 pods finished STEP: Delete "local-pvk7pth" and create a new PV for same local volume storage 04/18/24 18:22:39.934 STEP: Delete "local-pvxn875" and create a new PV for same local volume storage 04/18/24 18:22:39.948 STEP: Delete "local-pv5mnqp" and create a new PV for same local volume storage 04/18/24 18:22:39.962 Apr 18 18:22:42.895: INFO: Deleting pod pod-aad48fcf-4bc8-483b-8cc0-545579bcbaab Apr 18 18:22:42.906: INFO: Deleting PersistentVolumeClaim "pvc-gtvbd" Apr 18 18:22:42.911: INFO: Deleting PersistentVolumeClaim "pvc-6xmg2" Apr 18 18:22:42.915: INFO: Deleting PersistentVolumeClaim "pvc-sqngg" Apr 18 18:22:42.921: INFO: 10/28 pods finished STEP: Delete "local-pvlktv2" and create a new PV for same local volume storage 04/18/24 18:22:42.931 STEP: Delete "local-pvtb9b4" and create a new PV for same local volume storage 04/18/24 18:22:42.947 STEP: Delete "local-pvtb5vr" and create a new PV for same local volume storage 04/18/24 18:22:42.962 Apr 18 18:22:45.895: INFO: Deleting pod pod-070b1014-39fd-4bff-b232-6905c2be859c Apr 18 18:22:45.909: INFO: Deleting PersistentVolumeClaim "pvc-t6tk2" Apr 18 18:22:45.915: INFO: Deleting PersistentVolumeClaim "pvc-rj4v5" Apr 18 18:22:45.920: INFO: Deleting PersistentVolumeClaim "pvc-dgfbs" Apr 18 18:22:45.925: INFO: 11/28 pods finished STEP: Delete "local-pvj769l" and create a new PV for same local volume storage 04/18/24 18:22:45.938 STEP: Delete "local-pvk24gr" and create a new PV for same local volume storage 04/18/24 18:22:45.952 STEP: Delete "local-pvs8spd" and create a new PV for same local volume storage 04/18/24 18:22:45.967 Apr 18 18:22:46.900: INFO: Deleting pod pod-43bf0990-017c-4359-879f-32557c509cd9 Apr 18 18:22:46.909: INFO: Deleting PersistentVolumeClaim "pvc-cgc2r" Apr 18 18:22:46.915: INFO: Deleting PersistentVolumeClaim "pvc-24gp9" Apr 18 18:22:46.920: INFO: Deleting PersistentVolumeClaim "pvc-tg698" Apr 18 18:22:46.926: INFO: 12/28 pods finished STEP: Delete "local-pv5vw48" and create a new PV for same local volume storage 04/18/24 18:22:46.936 STEP: Delete "local-pv9kxbm" and create a new PV for same local volume storage 04/18/24 18:22:46.95 STEP: Delete "local-pvrgsgl" and create a new PV for same local volume storage 04/18/24 18:22:46.966 Apr 18 18:22:50.900: INFO: Deleting pod pod-22de8776-6d2b-480a-af12-d8ed67daf8d8 Apr 18 18:22:50.910: INFO: Deleting PersistentVolumeClaim "pvc-6ddgp" Apr 18 18:22:50.915: INFO: Deleting PersistentVolumeClaim "pvc-jn4gq" Apr 18 18:22:50.920: INFO: Deleting PersistentVolumeClaim "pvc-bt9m7" Apr 18 18:22:50.925: INFO: 13/28 pods finished STEP: Delete "local-pvtf2cc" and create a new PV for same local volume storage 04/18/24 18:22:50.937 STEP: Delete "local-pv2b2tz" and create a new PV for same local volume storage 04/18/24 18:22:50.955 STEP: Delete "local-pvq662c" and create a new PV for same local volume storage 04/18/24 18:22:50.969 Apr 18 18:22:52.895: INFO: Deleting pod pod-76847907-7178-4775-8b96-6ffc9b32fa5b Apr 18 18:22:52.903: INFO: Deleting PersistentVolumeClaim "pvc-kp6ch" Apr 18 18:22:52.909: INFO: Deleting PersistentVolumeClaim "pvc-52hf4" Apr 18 18:22:52.914: INFO: Deleting PersistentVolumeClaim "pvc-tp8hw" Apr 18 18:22:52.919: INFO: 14/28 pods finished STEP: Delete "local-pv6sd6k" and create a new PV for same local volume storage 04/18/24 18:22:52.931 STEP: Delete "local-pv4khrm" and create a new PV for same local volume 
storage 04/18/24 18:22:52.946 STEP: Delete "local-pvtbw2d" and create a new PV for same local volume storage 04/18/24 18:22:52.961 Apr 18 18:22:57.900: INFO: Deleting pod pod-4d393679-e84a-4d96-99ac-7858ff36bd3b Apr 18 18:22:57.910: INFO: Deleting PersistentVolumeClaim "pvc-btkxh" Apr 18 18:22:57.915: INFO: Deleting PersistentVolumeClaim "pvc-vd7nc" Apr 18 18:22:57.919: INFO: Deleting PersistentVolumeClaim "pvc-r2hpq" Apr 18 18:22:57.925: INFO: 15/28 pods finished STEP: Delete "local-pvcxs82" and create a new PV for same local volume storage 04/18/24 18:22:57.939 STEP: Delete "local-pvcxs82" and create a new PV for same local volume storage 04/18/24 18:22:57.952 STEP: Delete "local-pvblwjl" and create a new PV for same local volume storage 04/18/24 18:22:57.955 STEP: Delete "local-pvj8scl" and create a new PV for same local volume storage 04/18/24 18:22:57.969 Apr 18 18:22:58.900: INFO: Deleting pod pod-0546e632-9da3-45fd-96e2-c1372c7a98e2 Apr 18 18:22:58.911: INFO: Deleting PersistentVolumeClaim "pvc-8lpf8" Apr 18 18:22:58.917: INFO: Deleting PersistentVolumeClaim "pvc-mlvpl" Apr 18 18:22:58.921: INFO: Deleting PersistentVolumeClaim "pvc-f2lcn" Apr 18 18:22:58.926: INFO: 16/28 pods finished STEP: Delete "local-pv54ngj" and create a new PV for same local volume storage 04/18/24 18:22:58.939 STEP: Delete "local-pvzrphl" and create a new PV for same local volume storage 04/18/24 18:22:58.954 STEP: Delete "local-pvzdvlq" and create a new PV for same local volume storage 04/18/24 18:22:58.969 Apr 18 18:22:59.894: INFO: Deleting pod pod-3f7cd0d9-93a4-43df-9a8a-2f6f4ebea27e Apr 18 18:22:59.903: INFO: Deleting PersistentVolumeClaim "pvc-dmbhk" Apr 18 18:22:59.908: INFO: Deleting PersistentVolumeClaim "pvc-hvm5f" Apr 18 18:22:59.914: INFO: Deleting PersistentVolumeClaim "pvc-nsb8m" Apr 18 18:22:59.919: INFO: 17/28 pods finished Apr 18 18:22:59.919: INFO: Deleting pod pod-8630be22-df14-4a76-986f-ae1598a71c4b Apr 18 18:22:59.927: INFO: Deleting PersistentVolumeClaim "pvc-7xhzg" Apr 18 18:22:59.932: INFO: Deleting PersistentVolumeClaim "pvc-zmbsx" STEP: Delete "local-pvhtkqt" and create a new PV for same local volume storage 04/18/24 18:22:59.936 Apr 18 18:22:59.937: INFO: Deleting PersistentVolumeClaim "pvc-v5w6v" Apr 18 18:22:59.942: INFO: 18/28 pods finished STEP: Delete "local-pvjzlpn" and create a new PV for same local volume storage 04/18/24 18:22:59.95 STEP: Delete "local-pvtkl94" and create a new PV for same local volume storage 04/18/24 18:22:59.966 STEP: Delete "local-pvj4wqp" and create a new PV for same local volume storage 04/18/24 18:22:59.978 STEP: Delete "local-pvtcgz2" and create a new PV for same local volume storage 04/18/24 18:22:59.992 STEP: Delete "local-pvrqr8v" and create a new PV for same local volume storage 04/18/24 18:23:00.007 Apr 18 18:23:05.901: INFO: Deleting pod pod-2c50178a-9ce6-43ba-b3be-446adce6eeba Apr 18 18:23:05.915: INFO: Deleting PersistentVolumeClaim "pvc-4pk9b" Apr 18 18:23:05.921: INFO: Deleting PersistentVolumeClaim "pvc-n7995" Apr 18 18:23:05.926: INFO: Deleting PersistentVolumeClaim "pvc-9fg4h" Apr 18 18:23:05.931: INFO: 19/28 pods finished STEP: Delete "local-pv97d6m" and create a new PV for same local volume storage 04/18/24 18:23:05.944 STEP: Delete "local-pvjxcgj" and create a new PV for same local volume storage 04/18/24 18:23:05.958 STEP: Delete "local-pvswg62" and create a new PV for same local volume storage 04/18/24 18:23:05.973 Apr 18 18:23:08.895: INFO: Deleting pod pod-e9637635-5306-4c97-b2fd-1331c045d04b Apr 18 18:23:08.905: INFO: Deleting 
PersistentVolumeClaim "pvc-t9bz4"
Apr 18 18:23:08.910: INFO: Deleting PersistentVolumeClaim "pvc-sc5p5"
Apr 18 18:23:08.915: INFO: Deleting PersistentVolumeClaim "pvc-k6mmc"
Apr 18 18:23:08.920: INFO: 20/28 pods finished
STEP: Delete "local-pvtffxv" and create a new PV for same local volume storage 04/18/24 18:23:08.935
STEP: Delete "local-pvfkn8p" and create a new PV for same local volume storage 04/18/24 18:23:08.95
STEP: Delete "local-pv994bt" and create a new PV for same local volume storage 04/18/24 18:23:08.965
Apr 18 18:23:09.900: INFO: Deleting pod pod-555cb69d-dcfd-41e4-85c6-5a14cf3acfa5
Apr 18 18:23:09.910: INFO: Deleting PersistentVolumeClaim "pvc-dx4mq"
Apr 18 18:23:09.915: INFO: Deleting PersistentVolumeClaim "pvc-wm85m"
Apr 18 18:23:09.919: INFO: Deleting PersistentVolumeClaim "pvc-8n8dk"
Apr 18 18:23:09.925: INFO: 21/28 pods finished
STEP: Delete "local-pv8hks5" and create a new PV for same local volume storage 04/18/24 18:23:09.937
STEP: Delete "local-pvpgg62" and create a new PV for same local volume storage 04/18/24 18:23:09.951
STEP: Delete "local-pvct2vb" and create a new PV for same local volume storage 04/18/24 18:23:09.966
Apr 18 18:23:11.895: INFO: Deleting pod pod-136bdcaa-07b0-4e0c-8a15-6750fb9fe77b
Apr 18 18:23:11.904: INFO: Deleting PersistentVolumeClaim "pvc-9vhd2"
Apr 18 18:23:11.910: INFO: Deleting PersistentVolumeClaim "pvc-w82rb"
Apr 18 18:23:11.914: INFO: Deleting PersistentVolumeClaim "pvc-snq8b"
Apr 18 18:23:11.920: INFO: 22/28 pods finished
STEP: Delete "local-pvdxjw2" and create a new PV for same local volume storage 04/18/24 18:23:11.932
STEP: Delete "local-pvx66zp" and create a new PV for same local volume storage 04/18/24 18:23:11.946
STEP: Delete "local-pvhf6lw" and create a new PV for same local volume storage 04/18/24 18:23:11.964
Apr 18 18:23:18.894: INFO: Deleting pod pod-2ba65d0b-344c-400a-bc2d-02b4af2ef615
Apr 18 18:23:18.902: INFO: Deleting PersistentVolumeClaim "pvc-jcc8q"
Apr 18 18:23:18.907: INFO: Deleting PersistentVolumeClaim "pvc-wbnv5"
Apr 18 18:23:18.912: INFO: Deleting PersistentVolumeClaim "pvc-fnxzm"
Apr 18 18:23:18.917: INFO: 23/28 pods finished
STEP: Delete "local-pvctxh5" and create a new PV for same local volume storage 04/18/24 18:23:18.929
STEP: Delete "local-pvtqcwm" and create a new PV for same local volume storage 04/18/24 18:23:18.944
STEP: Delete "local-pv8fctw" and create a new PV for same local volume storage 04/18/24 18:23:18.958
Apr 18 18:23:19.895: INFO: Deleting pod pod-6265ef15-ac9a-4c6d-b935-0035a00e97b2
Apr 18 18:23:19.902: INFO: Deleting PersistentVolumeClaim "pvc-krpkg"
Apr 18 18:23:19.908: INFO: Deleting PersistentVolumeClaim "pvc-9ptdv"
Apr 18 18:23:19.913: INFO: Deleting PersistentVolumeClaim "pvc-8k9v8"
Apr 18 18:23:19.918: INFO: 24/28 pods finished
Apr 18 18:23:19.918: INFO: Deleting pod pod-ae5704ac-35a5-47bd-a057-63953eab0343
Apr 18 18:23:19.926: INFO: Deleting PersistentVolumeClaim "pvc-6b7vw"
STEP: Delete "local-pvs9cf9" and create a new PV for same local volume storage 04/18/24 18:23:19.93
Apr 18 18:23:19.931: INFO: Deleting PersistentVolumeClaim "pvc-vk57q"
Apr 18 18:23:19.936: INFO: Deleting PersistentVolumeClaim "pvc-lb8zp"
Apr 18 18:23:19.942: INFO: 25/28 pods finished
STEP: Delete "local-pvx9jg9" and create a new PV for same local volume storage 04/18/24 18:23:19.945
STEP: Delete "local-pv5bsmk" and create a new PV for same local volume storage 04/18/24 18:23:19.959
STEP: Delete "local-pvx6l7z" and create a new PV for same local volume storage 04/18/24 18:23:19.971
STEP: Delete "local-pvfvd8r" and create a new PV for same local volume storage 04/18/24 18:23:19.986
STEP: Delete "local-pvls246" and create a new PV for same local volume storage 04/18/24 18:23:20
Apr 18 18:23:20.899: INFO: Deleting pod pod-4c7dd5b7-72db-4600-aec6-328eaf359c8f
Apr 18 18:23:20.910: INFO: Deleting PersistentVolumeClaim "pvc-k4d4b"
Apr 18 18:23:20.915: INFO: Deleting PersistentVolumeClaim "pvc-gmx4d"
Apr 18 18:23:20.920: INFO: Deleting PersistentVolumeClaim "pvc-hnvg8"
Apr 18 18:23:20.925: INFO: 26/28 pods finished
STEP: Delete "local-pv7spb5" and create a new PV for same local volume storage 04/18/24 18:23:20.938
STEP: Delete "local-pvrrsjn" and create a new PV for same local volume storage 04/18/24 18:23:20.952
STEP: Delete "local-pvz85jq" and create a new PV for same local volume storage 04/18/24 18:23:20.969
Apr 18 18:23:21.894: INFO: Deleting pod pod-e518fd0e-f00d-4778-8869-3b5e80d8a4c5
Apr 18 18:23:21.902: INFO: Deleting PersistentVolumeClaim "pvc-d98g2"
Apr 18 18:23:21.907: INFO: Deleting PersistentVolumeClaim "pvc-vqrzg"
Apr 18 18:23:21.917: INFO: Deleting PersistentVolumeClaim "pvc-7x66k"
Apr 18 18:23:21.938: INFO: 27/28 pods finished
STEP: Delete "local-pvgqqg8" and create a new PV for same local volume storage 04/18/24 18:23:21.95
STEP: Delete "local-pvnnzxd" and create a new PV for same local volume storage 04/18/24 18:23:21.966
STEP: Delete "local-pvrzqsc" and create a new PV for same local volume storage 04/18/24 18:23:21.981
Apr 18 18:23:23.894: INFO: Deleting pod pod-e1331933-d024-420e-8404-eec66f73f0d5
Apr 18 18:23:23.904: INFO: Deleting PersistentVolumeClaim "pvc-9vcrq"
Apr 18 18:23:23.909: INFO: Deleting PersistentVolumeClaim "pvc-kb98s"
Apr 18 18:23:23.915: INFO: Deleting PersistentVolumeClaim "pvc-4cbzf"
Apr 18 18:23:23.920: INFO: 28/28 pods finished
[AfterEach] Stress with local volumes [Serial]
test/e2e/storage/persistent_volumes-local.go:522
STEP: Stop and wait for recycle goroutine to finish 04/18/24 18:23:23.92
STEP: Clean all PVs 04/18/24 18:23:23.92
STEP: Cleaning up 10 local volumes on node "v126-worker" 04/18/24 18:23:23.92
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.921
Apr 18 18:23:23.921: INFO: pvc is nil
Apr 18 18:23:23.921: INFO: Deleting PersistentVolume "local-pvj67fk"
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.926
Apr 18 18:23:23.926: INFO: pvc is nil
Apr 18 18:23:23.926: INFO: Deleting PersistentVolume "local-pvj66qj"
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.93
Apr 18 18:23:23.931: INFO: pvc is nil
Apr 18 18:23:23.931: INFO: Deleting PersistentVolume "local-pvwzfdj"
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.936
Apr 18 18:23:23.936: INFO: pvc is nil
Apr 18 18:23:23.936: INFO: Deleting PersistentVolume "local-pvbvktg"
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.941
Apr 18 18:23:23.941: INFO: pvc is nil
Apr 18 18:23:23.941: INFO: Deleting PersistentVolume "local-pvv77ss"
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.947
Apr 18 18:23:23.947: INFO: pvc is nil
Apr 18 18:23:23.947: INFO: Deleting PersistentVolume "local-pvb5zrf"
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.952
Apr 18 18:23:23.952: INFO: pvc is nil
Apr 18 18:23:23.952: INFO: Deleting PersistentVolume "local-pvcjk44"
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.957
Apr 18 18:23:23.957: INFO: pvc is nil
Apr 18 18:23:23.957: INFO: Deleting PersistentVolume "local-pv946rq"
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.961
Apr 18 18:23:23.961: INFO: pvc is nil
Apr 18 18:23:23.962: INFO: Deleting PersistentVolume "local-pvb4x5j"
STEP: Cleaning up PVC and PV 04/18/24 18:23:23.967
Apr 18
18:23:23.967: INFO: pvc is nil Apr 18 18:23:23.967: INFO: Deleting PersistentVolume "local-pv5g6k6" STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2" 04/18/24 18:23:23.972 Apr 18 18:23:23.972: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:23.972: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:23.973: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:23.974: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:24.113 Apr 18 18:23:24.113: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.113: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.114: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.114: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6" 04/18/24 18:23:24.284 Apr 18 18:23:24.284: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.284: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.286: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.286: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:24.427 Apr 18 18:23:24.427: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.427: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.428: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.428: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23" 04/18/24 18:23:24.563 Apr 18 18:23:24.563: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.563: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.567: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.567: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:24.703 Apr 18 18:23:24.703: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.703: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.705: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.705: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635" 04/18/24 18:23:24.834 Apr 18 18:23:24.834: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.834: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.836: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.836: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:24.969 Apr 18 18:23:24.970: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.970: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.971: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.971: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa" 04/18/24 18:23:25.122 Apr 18 18:23:25.123: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.123: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.124: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.124: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:25.267 Apr 18 18:23:25.267: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.267: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.269: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.269: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60" 04/18/24 
18:23:25.41 Apr 18 18:23:25.410: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.410: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.412: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.412: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:25.557 Apr 18 18:23:25.557: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.557: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.558: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.558: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3" 04/18/24 18:23:25.718 Apr 18 18:23:25.719: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.719: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.720: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.720: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:25.849 Apr 18 18:23:25.850: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.850: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.851: INFO: ExecWithOptions: Clientset creation 
Apr 18 18:23:25.851: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a" 04/18/24 18:23:25.994 Apr 18 18:23:25.994: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.994: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.995: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.996: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:26.13 Apr 18 18:23:26.130: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.130: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.131: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.131: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11" 04/18/24 18:23:26.281 Apr 18 18:23:26.281: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.281: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.282: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.283: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:26.444 Apr 18 18:23:26.444: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.444: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.445: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.445: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e" 04/18/24 18:23:26.6 Apr 18 18:23:26.601: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.601: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.602: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.602: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:26.747 Apr 18 18:23:26.747: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.747: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.749: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.749: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Cleaning up 10 local volumes on node "v126-worker2" 04/18/24 18:23:26.889 STEP: Cleaning up PVC and PV 04/18/24 18:23:26.889 Apr 18 
18:23:26.889: INFO: pvc is nil Apr 18 18:23:26.889: INFO: Deleting PersistentVolume "local-pvpktlw" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.896 Apr 18 18:23:26.896: INFO: pvc is nil Apr 18 18:23:26.896: INFO: Deleting PersistentVolume "local-pvpcnjd" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.901 Apr 18 18:23:26.901: INFO: pvc is nil Apr 18 18:23:26.901: INFO: Deleting PersistentVolume "local-pvpcrz6" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.904 Apr 18 18:23:26.905: INFO: pvc is nil Apr 18 18:23:26.905: INFO: Deleting PersistentVolume "local-pvk9vgk" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.909 Apr 18 18:23:26.909: INFO: pvc is nil Apr 18 18:23:26.909: INFO: Deleting PersistentVolume "local-pv54rlh" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.913 Apr 18 18:23:26.913: INFO: pvc is nil Apr 18 18:23:26.913: INFO: Deleting PersistentVolume "local-pvsr4w8" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.916 Apr 18 18:23:26.916: INFO: pvc is nil Apr 18 18:23:26.916: INFO: Deleting PersistentVolume "local-pv2rxbq" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.92 Apr 18 18:23:26.920: INFO: pvc is nil Apr 18 18:23:26.920: INFO: Deleting PersistentVolume "local-pvz5sbj" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.924 Apr 18 18:23:26.924: INFO: pvc is nil Apr 18 18:23:26.924: INFO: Deleting PersistentVolume "local-pv6l2jh" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.928 Apr 18 18:23:26.928: INFO: pvc is nil Apr 18 18:23:26.928: INFO: Deleting PersistentVolume "local-pvv622p" STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19" 04/18/24 18:23:26.932 Apr 18 18:23:26.932: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.933: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.934: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.934: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:27.09 Apr 18 18:23:27.090: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.090: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.091: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.091: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662" 04/18/24 18:23:27.191 Apr 18 18:23:27.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.192: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.193: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.193: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:27.348 Apr 18 18:23:27.348: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.348: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.350: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.350: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f" 04/18/24 18:23:27.503 Apr 18 18:23:27.503: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.503: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.504: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.504: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 
04/18/24 18:23:27.667 Apr 18 18:23:27.667: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.667: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.668: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.669: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2" 04/18/24 18:23:27.817 Apr 18 18:23:27.817: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.817: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.819: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.819: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:27.971 Apr 18 18:23:27.971: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.971: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.972: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.972: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00" 04/18/24 18:23:28.121 Apr 18 18:23:28.122: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.122: INFO: >>> 
kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.123: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.123: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:28.265 Apr 18 18:23:28.266: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.266: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.267: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.267: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf" 04/18/24 18:23:28.408 Apr 18 18:23:28.409: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.409: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.410: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.410: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:28.563 Apr 18 18:23:28.563: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.563: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.564: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.564: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880" 04/18/24 18:23:28.726 Apr 18 18:23:28.726: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.726: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.727: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.727: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:28.86 Apr 18 18:23:28.860: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.860: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.861: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.861: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3" 04/18/24 18:23:29.009 Apr 18 18:23:29.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.010: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.011: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.011: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 
04/18/24 18:23:29.159 Apr 18 18:23:29.160: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.160: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.161: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.161: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af" 04/18/24 18:23:29.322 Apr 18 18:23:29.322: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.322: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.323: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.323: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:29.463 Apr 18 18:23:29.463: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.463: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.465: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.465: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2" 04/18/24 18:23:29.605 Apr 18 18:23:29.605: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.606: INFO: >>> 
kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.607: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.607: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:29.762 Apr 18 18:23:29.762: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.762: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.765: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.765: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:23:29.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-38" for this suite. 
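Note: none of the setup or teardown above runs directly on the node; every ExecWithOptions entry execs into the hostexec pod's agnhost-container and uses nsenter to enter the host mount namespace. As a rough sketch of what one volume's lifecycle amounts to on the host, using an illustrative placeholder directory rather than any of the paths above:

    DIR="/tmp/local-volume-test-<uuid>"   # illustrative placeholder, not a real test path
    # setup: back the local PV with a 10m tmpfs (the "Creating tmpfs mount point" steps)
    nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c "mkdir -p \"$DIR\" && mount -t tmpfs -o size=10m tmpfs-\"$DIR\" \"$DIR\""
    # teardown: the "Unmount tmpfs mount point" and "Removing the test directory" steps
    nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c "umount \"$DIR\""
    nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c "rm -r \"$DIR\""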
04/18/24 18:23:29.935
------------------------------
• [SLOW TEST] [82.295 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
Stress with local volumes [Serial]
test/e2e/storage/persistent_volumes-local.go:444
should be able to process many pods and reuse local volumes
test/e2e/storage/persistent_volumes-local.go:534
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] PersistentVolumes-local
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:22:07.645
Apr 18 18:22:07.645: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test 04/18/24 18:22:07.647
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:22:07.657
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:22:07.661
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/storage/persistent_volumes-local.go:161
[BeforeEach] Stress with local volumes [Serial]
test/e2e/storage/persistent_volumes-local.go:458
STEP: Setting up 10 local volumes on node "v126-worker" 04/18/24 18:22:07.674
STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2" 04/18/24 18:22:07.674
Apr 18 18:22:07.682: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker-6lhv8" in namespace "persistent-local-volumes-test-38" to be "running"
Apr 18 18:22:07.685: INFO: Pod "hostexec-v126-worker-6lhv8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071675ms
Apr 18 18:22:09.690: INFO: Pod "hostexec-v126-worker-6lhv8": Phase="Running", Reason="", readiness=true.
Elapsed: 2.007887571s Apr 18 18:22:09.690: INFO: Pod "hostexec-v126-worker-6lhv8" satisfied condition "running" Apr 18 18:22:09.690: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2" "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:09.690: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:09.692: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:09.692: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2%22+%22%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6" 04/18/24 18:22:09.848 Apr 18 18:22:09.848: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6" "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:09.848: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:09.849: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:09.849: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6%22+%22%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23" 04/18/24 18:22:09.998 Apr 18 18:22:09.998: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23" "/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:09.998: INFO: >>> kubeConfig: 
/home/xtesting/.kube/config Apr 18 18:22:10.000: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.000: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23%22+%22%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635" 04/18/24 18:22:10.159 Apr 18 18:22:10.160: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635" "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.160: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.161: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.161: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635%22+%22%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa" 04/18/24 18:22:10.308 Apr 18 18:22:10.309: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa" "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.309: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.310: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.310: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa%22+%22%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60" 04/18/24 18:22:10.464 Apr 18 18:22:10.464: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60" "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.464: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.465: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.465: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60%22+%22%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3" 04/18/24 18:22:10.625 Apr 18 18:22:10.625: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3" "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.625: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.626: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.626: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3%22+%22%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a" 04/18/24 
18:22:10.779 Apr 18 18:22:10.779: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a" "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.779: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.781: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.781: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a%22+%22%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11" 04/18/24 18:22:10.868 Apr 18 18:22:10.868: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11" "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:10.868: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:10.870: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:10.870: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11%22+%22%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e" 04/18/24 18:22:11.001 Apr 18 18:22:11.001: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e" "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:11.001: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:11.002: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:11.003: INFO: 
ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e%22+%22%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Setting up 10 local volumes on node "v126-worker2" 04/18/24 18:22:11.158 STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19" 04/18/24 18:22:11.159 Apr 18 18:22:11.166: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-qpb6j" in namespace "persistent-local-volumes-test-38" to be "running" Apr 18 18:22:11.169: INFO: Pod "hostexec-v126-worker2-qpb6j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.991798ms Apr 18 18:22:13.175: INFO: Pod "hostexec-v126-worker2-qpb6j": Phase="Running", Reason="", readiness=true. Elapsed: 2.008269065s Apr 18 18:22:13.175: INFO: Pod "hostexec-v126-worker2-qpb6j" satisfied condition "running" Apr 18 18:22:13.175: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19" "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.175: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.176: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.176: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19%22+%22%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662" 04/18/24 18:22:13.344 Apr 18 18:22:13.344: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662" "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.344: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.346: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.346: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662%22+%22%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f" 04/18/24 18:22:13.509 Apr 18 18:22:13.510: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f" "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.510: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.511: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.511: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f%22+%22%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2" 04/18/24 18:22:13.658 Apr 18 18:22:13.658: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2" "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.658: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.659: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.659: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2%22+%22%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00" 
04/18/24 18:22:13.812 Apr 18 18:22:13.812: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00" "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.812: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.813: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.813: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00%22+%22%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf" 04/18/24 18:22:13.937 Apr 18 18:22:13.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf" "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:13.937: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:13.938: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:13.939: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf%22+%22%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880" 04/18/24 18:22:14.086 Apr 18 18:22:14.086: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880" "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:14.086: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:14.088: INFO: ExecWithOptions: Clientset creation Apr 18 
18:22:14.088: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880%22+%22%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3" 04/18/24 18:22:14.228 Apr 18 18:22:14.228: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3" "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:14.228: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:14.229: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:14.229: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3%22+%22%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af" 04/18/24 18:22:14.379 Apr 18 18:22:14.380: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af" "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:14.380: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:14.381: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:14.381: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af%22+%22%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path 
"/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2" 04/18/24 18:22:14.522 Apr 18 18:22:14.522: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2" "/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:22:14.522: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:22:14.524: INFO: ExecWithOptions: Clientset creation Apr 18 18:22:14.524: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2%22+%22%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Create 20 PVs 04/18/24 18:22:14.673 STEP: Start a goroutine to recycle unbound PVs 04/18/24 18:22:14.761 [It] should be able to process many pods and reuse local volumes test/e2e/storage/persistent_volumes-local.go:534 STEP: Creating 7 pods periodically 04/18/24 18:22:14.761 STEP: Waiting for all pods to complete successfully 04/18/24 18:22:14.762 Apr 18 18:22:21.901: INFO: Deleting pod pod-1ccecc6d-8955-40e8-a3cc-15039ff79ae8 Apr 18 18:22:21.909: INFO: Deleting PersistentVolumeClaim "pvc-wxq2q" Apr 18 18:22:21.914: INFO: Deleting PersistentVolumeClaim "pvc-rkx6p" Apr 18 18:22:21.919: INFO: Deleting PersistentVolumeClaim "pvc-p6tfn" Apr 18 18:22:21.924: INFO: 1/28 pods finished Apr 18 18:22:21.924: INFO: Deleting pod pod-2535a53f-97c9-497c-83da-c84fb3edeca5 Apr 18 18:22:21.933: INFO: Deleting PersistentVolumeClaim "pvc-kwjw9" STEP: Delete "local-pvph4pd" and create a new PV for same local volume storage 04/18/24 18:22:21.936 Apr 18 18:22:21.938: INFO: Deleting PersistentVolumeClaim "pvc-w9h7p" Apr 18 18:22:21.946: INFO: Deleting PersistentVolumeClaim "pvc-n2zv9" Apr 18 18:22:21.961: INFO: 2/28 pods finished STEP: Delete "local-pvnzz4x" and create a new PV for same local volume storage 04/18/24 18:22:21.966 STEP: Delete "local-pvf7ktd" and create a new PV for same local volume storage 04/18/24 18:22:21.977 STEP: Delete "local-pv5xhdz" and create a new PV for same local volume storage 04/18/24 18:22:21.994 STEP: Delete "local-pvl97zg" and create a new PV for same local volume storage 04/18/24 18:22:22.007 STEP: Delete "local-pvhpk9w" and create a new PV for same local volume storage 04/18/24 18:22:22.021 Apr 18 18:22:25.900: INFO: Deleting pod pod-10961d13-5c37-4cc3-bdb2-38cb1458d881 Apr 18 18:22:25.909: INFO: Deleting PersistentVolumeClaim "pvc-2vck4" Apr 18 18:22:25.914: INFO: Deleting PersistentVolumeClaim "pvc-rsh4d" Apr 18 18:22:25.920: INFO: Deleting PersistentVolumeClaim "pvc-x2wng" Apr 18 18:22:25.925: INFO: 3/28 pods finished Apr 18 18:22:25.925: INFO: Deleting pod pod-31fd79fc-c1af-4daa-8549-0c6095e8ccdf Apr 18 18:22:25.934: INFO: Deleting PersistentVolumeClaim "pvc-6w2t8" STEP: Delete "local-pv2t4vz" and create a new PV for same local 
volume storage 04/18/24 18:22:25.936 Apr 18 18:22:25.939: INFO: Deleting PersistentVolumeClaim "pvc-fpdz7" Apr 18 18:22:25.943: INFO: Deleting PersistentVolumeClaim "pvc-6pb7z" Apr 18 18:22:25.948: INFO: 4/28 pods finished STEP: Delete "local-pvwrsd5" and create a new PV for same local volume storage 04/18/24 18:22:25.951 STEP: Delete "local-pv7zmp6" and create a new PV for same local volume storage 04/18/24 18:22:25.963 STEP: Delete "local-pvm4vws" and create a new PV for same local volume storage 04/18/24 18:22:25.977 STEP: Delete "local-pvdgrhs" and create a new PV for same local volume storage 04/18/24 18:22:25.991 STEP: Delete "local-pvmrzbm" and create a new PV for same local volume storage 04/18/24 18:22:26.006 Apr 18 18:22:28.897: INFO: Deleting pod pod-3dd03287-287d-4089-9a34-270023e3cd49 Apr 18 18:22:28.906: INFO: Deleting PersistentVolumeClaim "pvc-md59r" Apr 18 18:22:28.911: INFO: Deleting PersistentVolumeClaim "pvc-wn9dq" Apr 18 18:22:28.917: INFO: Deleting PersistentVolumeClaim "pvc-qxfcr" Apr 18 18:22:28.925: INFO: 5/28 pods finished STEP: Delete "local-pvbvkk9" and create a new PV for same local volume storage 04/18/24 18:22:28.938 STEP: Delete "local-pvbzf44" and create a new PV for same local volume storage 04/18/24 18:22:28.952 STEP: Delete "local-pvvl8p5" and create a new PV for same local volume storage 04/18/24 18:22:28.968 Apr 18 18:22:30.901: INFO: Deleting pod pod-243195a3-408b-4e61-be40-41353209c9a6 Apr 18 18:22:30.909: INFO: Deleting PersistentVolumeClaim "pvc-4pfcl" Apr 18 18:22:30.914: INFO: Deleting PersistentVolumeClaim "pvc-4c4b4" Apr 18 18:22:30.919: INFO: Deleting PersistentVolumeClaim "pvc-6fhtf" Apr 18 18:22:30.923: INFO: 6/28 pods finished STEP: Delete "local-pv98xdd" and create a new PV for same local volume storage 04/18/24 18:22:30.935 STEP: Delete "local-pvcqrjl" and create a new PV for same local volume storage 04/18/24 18:22:30.949 STEP: Delete "local-pvcqrjl" and create a new PV for same local volume storage 04/18/24 18:22:30.962 STEP: Delete "local-pvz4k5z" and create a new PV for same local volume storage 04/18/24 18:22:30.965 Apr 18 18:22:35.895: INFO: Deleting pod pod-04b4bced-9f6d-47ba-bc71-9dfa936f71b5 Apr 18 18:22:35.903: INFO: Deleting PersistentVolumeClaim "pvc-kzxg6" Apr 18 18:22:35.907: INFO: Deleting PersistentVolumeClaim "pvc-q8lgs" Apr 18 18:22:35.911: INFO: Deleting PersistentVolumeClaim "pvc-9k5rq" Apr 18 18:22:35.916: INFO: 7/28 pods finished STEP: Delete "local-pvtwr9f" and create a new PV for same local volume storage 04/18/24 18:22:35.93 STEP: Delete "local-pvb8pmh" and create a new PV for same local volume storage 04/18/24 18:22:35.944 STEP: Delete "local-pvrbqxs" and create a new PV for same local volume storage 04/18/24 18:22:35.958 Apr 18 18:22:37.895: INFO: Deleting pod pod-266fb444-3fed-40b3-93b6-1b6f7775b3f5 Apr 18 18:22:37.905: INFO: Deleting PersistentVolumeClaim "pvc-kvlwc" Apr 18 18:22:37.910: INFO: Deleting PersistentVolumeClaim "pvc-7k7mp" Apr 18 18:22:37.915: INFO: Deleting PersistentVolumeClaim "pvc-cr4g8" Apr 18 18:22:37.920: INFO: 8/28 pods finished STEP: Delete "local-pvbbr8z" and create a new PV for same local volume storage 04/18/24 18:22:37.934 STEP: Delete "local-pv69pm7" and create a new PV for same local volume storage 04/18/24 18:22:37.949 STEP: Delete "local-pvm4p28" and create a new PV for same local volume storage 04/18/24 18:22:37.963 Apr 18 18:22:39.899: INFO: Deleting pod pod-1e92c2a2-f5d9-4c5c-94cf-f0094f040bc1 Apr 18 18:22:39.907: INFO: Deleting PersistentVolumeClaim "pvc-6s9tg" Apr 18 
18:22:39.912: INFO: Deleting PersistentVolumeClaim "pvc-f2wwz" Apr 18 18:22:39.917: INFO: Deleting PersistentVolumeClaim "pvc-2jjt5" Apr 18 18:22:39.922: INFO: 9/28 pods finished STEP: Delete "local-pvk7pth" and create a new PV for same local volume storage 04/18/24 18:22:39.934 STEP: Delete "local-pvxn875" and create a new PV for same local volume storage 04/18/24 18:22:39.948 STEP: Delete "local-pv5mnqp" and create a new PV for same local volume storage 04/18/24 18:22:39.962 Apr 18 18:22:42.895: INFO: Deleting pod pod-aad48fcf-4bc8-483b-8cc0-545579bcbaab Apr 18 18:22:42.906: INFO: Deleting PersistentVolumeClaim "pvc-gtvbd" Apr 18 18:22:42.911: INFO: Deleting PersistentVolumeClaim "pvc-6xmg2" Apr 18 18:22:42.915: INFO: Deleting PersistentVolumeClaim "pvc-sqngg" Apr 18 18:22:42.921: INFO: 10/28 pods finished STEP: Delete "local-pvlktv2" and create a new PV for same local volume storage 04/18/24 18:22:42.931 STEP: Delete "local-pvtb9b4" and create a new PV for same local volume storage 04/18/24 18:22:42.947 STEP: Delete "local-pvtb5vr" and create a new PV for same local volume storage 04/18/24 18:22:42.962 Apr 18 18:22:45.895: INFO: Deleting pod pod-070b1014-39fd-4bff-b232-6905c2be859c Apr 18 18:22:45.909: INFO: Deleting PersistentVolumeClaim "pvc-t6tk2" Apr 18 18:22:45.915: INFO: Deleting PersistentVolumeClaim "pvc-rj4v5" Apr 18 18:22:45.920: INFO: Deleting PersistentVolumeClaim "pvc-dgfbs" Apr 18 18:22:45.925: INFO: 11/28 pods finished STEP: Delete "local-pvj769l" and create a new PV for same local volume storage 04/18/24 18:22:45.938 STEP: Delete "local-pvk24gr" and create a new PV for same local volume storage 04/18/24 18:22:45.952 STEP: Delete "local-pvs8spd" and create a new PV for same local volume storage 04/18/24 18:22:45.967 Apr 18 18:22:46.900: INFO: Deleting pod pod-43bf0990-017c-4359-879f-32557c509cd9 Apr 18 18:22:46.909: INFO: Deleting PersistentVolumeClaim "pvc-cgc2r" Apr 18 18:22:46.915: INFO: Deleting PersistentVolumeClaim "pvc-24gp9" Apr 18 18:22:46.920: INFO: Deleting PersistentVolumeClaim "pvc-tg698" Apr 18 18:22:46.926: INFO: 12/28 pods finished STEP: Delete "local-pv5vw48" and create a new PV for same local volume storage 04/18/24 18:22:46.936 STEP: Delete "local-pv9kxbm" and create a new PV for same local volume storage 04/18/24 18:22:46.95 STEP: Delete "local-pvrgsgl" and create a new PV for same local volume storage 04/18/24 18:22:46.966 Apr 18 18:22:50.900: INFO: Deleting pod pod-22de8776-6d2b-480a-af12-d8ed67daf8d8 Apr 18 18:22:50.910: INFO: Deleting PersistentVolumeClaim "pvc-6ddgp" Apr 18 18:22:50.915: INFO: Deleting PersistentVolumeClaim "pvc-jn4gq" Apr 18 18:22:50.920: INFO: Deleting PersistentVolumeClaim "pvc-bt9m7" Apr 18 18:22:50.925: INFO: 13/28 pods finished STEP: Delete "local-pvtf2cc" and create a new PV for same local volume storage 04/18/24 18:22:50.937 STEP: Delete "local-pv2b2tz" and create a new PV for same local volume storage 04/18/24 18:22:50.955 STEP: Delete "local-pvq662c" and create a new PV for same local volume storage 04/18/24 18:22:50.969 Apr 18 18:22:52.895: INFO: Deleting pod pod-76847907-7178-4775-8b96-6ffc9b32fa5b Apr 18 18:22:52.903: INFO: Deleting PersistentVolumeClaim "pvc-kp6ch" Apr 18 18:22:52.909: INFO: Deleting PersistentVolumeClaim "pvc-52hf4" Apr 18 18:22:52.914: INFO: Deleting PersistentVolumeClaim "pvc-tp8hw" Apr 18 18:22:52.919: INFO: 14/28 pods finished STEP: Delete "local-pv6sd6k" and create a new PV for same local volume storage 04/18/24 18:22:52.931 STEP: Delete "local-pv4khrm" and create a new PV for same local volume 
storage 04/18/24 18:22:52.946 STEP: Delete "local-pvtbw2d" and create a new PV for same local volume storage 04/18/24 18:22:52.961 Apr 18 18:22:57.900: INFO: Deleting pod pod-4d393679-e84a-4d96-99ac-7858ff36bd3b Apr 18 18:22:57.910: INFO: Deleting PersistentVolumeClaim "pvc-btkxh" Apr 18 18:22:57.915: INFO: Deleting PersistentVolumeClaim "pvc-vd7nc" Apr 18 18:22:57.919: INFO: Deleting PersistentVolumeClaim "pvc-r2hpq" Apr 18 18:22:57.925: INFO: 15/28 pods finished STEP: Delete "local-pvcxs82" and create a new PV for same local volume storage 04/18/24 18:22:57.939 STEP: Delete "local-pvcxs82" and create a new PV for same local volume storage 04/18/24 18:22:57.952 STEP: Delete "local-pvblwjl" and create a new PV for same local volume storage 04/18/24 18:22:57.955 STEP: Delete "local-pvj8scl" and create a new PV for same local volume storage 04/18/24 18:22:57.969 Apr 18 18:22:58.900: INFO: Deleting pod pod-0546e632-9da3-45fd-96e2-c1372c7a98e2 Apr 18 18:22:58.911: INFO: Deleting PersistentVolumeClaim "pvc-8lpf8" Apr 18 18:22:58.917: INFO: Deleting PersistentVolumeClaim "pvc-mlvpl" Apr 18 18:22:58.921: INFO: Deleting PersistentVolumeClaim "pvc-f2lcn" Apr 18 18:22:58.926: INFO: 16/28 pods finished STEP: Delete "local-pv54ngj" and create a new PV for same local volume storage 04/18/24 18:22:58.939 STEP: Delete "local-pvzrphl" and create a new PV for same local volume storage 04/18/24 18:22:58.954 STEP: Delete "local-pvzdvlq" and create a new PV for same local volume storage 04/18/24 18:22:58.969 Apr 18 18:22:59.894: INFO: Deleting pod pod-3f7cd0d9-93a4-43df-9a8a-2f6f4ebea27e Apr 18 18:22:59.903: INFO: Deleting PersistentVolumeClaim "pvc-dmbhk" Apr 18 18:22:59.908: INFO: Deleting PersistentVolumeClaim "pvc-hvm5f" Apr 18 18:22:59.914: INFO: Deleting PersistentVolumeClaim "pvc-nsb8m" Apr 18 18:22:59.919: INFO: 17/28 pods finished Apr 18 18:22:59.919: INFO: Deleting pod pod-8630be22-df14-4a76-986f-ae1598a71c4b Apr 18 18:22:59.927: INFO: Deleting PersistentVolumeClaim "pvc-7xhzg" Apr 18 18:22:59.932: INFO: Deleting PersistentVolumeClaim "pvc-zmbsx" STEP: Delete "local-pvhtkqt" and create a new PV for same local volume storage 04/18/24 18:22:59.936 Apr 18 18:22:59.937: INFO: Deleting PersistentVolumeClaim "pvc-v5w6v" Apr 18 18:22:59.942: INFO: 18/28 pods finished STEP: Delete "local-pvjzlpn" and create a new PV for same local volume storage 04/18/24 18:22:59.95 STEP: Delete "local-pvtkl94" and create a new PV for same local volume storage 04/18/24 18:22:59.966 STEP: Delete "local-pvj4wqp" and create a new PV for same local volume storage 04/18/24 18:22:59.978 STEP: Delete "local-pvtcgz2" and create a new PV for same local volume storage 04/18/24 18:22:59.992 STEP: Delete "local-pvrqr8v" and create a new PV for same local volume storage 04/18/24 18:23:00.007 Apr 18 18:23:05.901: INFO: Deleting pod pod-2c50178a-9ce6-43ba-b3be-446adce6eeba Apr 18 18:23:05.915: INFO: Deleting PersistentVolumeClaim "pvc-4pk9b" Apr 18 18:23:05.921: INFO: Deleting PersistentVolumeClaim "pvc-n7995" Apr 18 18:23:05.926: INFO: Deleting PersistentVolumeClaim "pvc-9fg4h" Apr 18 18:23:05.931: INFO: 19/28 pods finished STEP: Delete "local-pv97d6m" and create a new PV for same local volume storage 04/18/24 18:23:05.944 STEP: Delete "local-pvjxcgj" and create a new PV for same local volume storage 04/18/24 18:23:05.958 STEP: Delete "local-pvswg62" and create a new PV for same local volume storage 04/18/24 18:23:05.973 Apr 18 18:23:08.895: INFO: Deleting pod pod-e9637635-5306-4c97-b2fd-1331c045d04b Apr 18 18:23:08.905: INFO: Deleting 
PersistentVolumeClaim "pvc-t9bz4" Apr 18 18:23:08.910: INFO: Deleting PersistentVolumeClaim "pvc-sc5p5" Apr 18 18:23:08.915: INFO: Deleting PersistentVolumeClaim "pvc-k6mmc" Apr 18 18:23:08.920: INFO: 20/28 pods finished STEP: Delete "local-pvtffxv" and create a new PV for same local volume storage 04/18/24 18:23:08.935 STEP: Delete "local-pvfkn8p" and create a new PV for same local volume storage 04/18/24 18:23:08.95 STEP: Delete "local-pv994bt" and create a new PV for same local volume storage 04/18/24 18:23:08.965 Apr 18 18:23:09.900: INFO: Deleting pod pod-555cb69d-dcfd-41e4-85c6-5a14cf3acfa5 Apr 18 18:23:09.910: INFO: Deleting PersistentVolumeClaim "pvc-dx4mq" Apr 18 18:23:09.915: INFO: Deleting PersistentVolumeClaim "pvc-wm85m" Apr 18 18:23:09.919: INFO: Deleting PersistentVolumeClaim "pvc-8n8dk" Apr 18 18:23:09.925: INFO: 21/28 pods finished STEP: Delete "local-pv8hks5" and create a new PV for same local volume storage 04/18/24 18:23:09.937 STEP: Delete "local-pvpgg62" and create a new PV for same local volume storage 04/18/24 18:23:09.951 STEP: Delete "local-pvct2vb" and create a new PV for same local volume storage 04/18/24 18:23:09.966 Apr 18 18:23:11.895: INFO: Deleting pod pod-136bdcaa-07b0-4e0c-8a15-6750fb9fe77b Apr 18 18:23:11.904: INFO: Deleting PersistentVolumeClaim "pvc-9vhd2" Apr 18 18:23:11.910: INFO: Deleting PersistentVolumeClaim "pvc-w82rb" Apr 18 18:23:11.914: INFO: Deleting PersistentVolumeClaim "pvc-snq8b" Apr 18 18:23:11.920: INFO: 22/28 pods finished STEP: Delete "local-pvdxjw2" and create a new PV for same local volume storage 04/18/24 18:23:11.932 STEP: Delete "local-pvx66zp" and create a new PV for same local volume storage 04/18/24 18:23:11.946 STEP: Delete "local-pvhf6lw" and create a new PV for same local volume storage 04/18/24 18:23:11.964 Apr 18 18:23:18.894: INFO: Deleting pod pod-2ba65d0b-344c-400a-bc2d-02b4af2ef615 Apr 18 18:23:18.902: INFO: Deleting PersistentVolumeClaim "pvc-jcc8q" Apr 18 18:23:18.907: INFO: Deleting PersistentVolumeClaim "pvc-wbnv5" Apr 18 18:23:18.912: INFO: Deleting PersistentVolumeClaim "pvc-fnxzm" Apr 18 18:23:18.917: INFO: 23/28 pods finished STEP: Delete "local-pvctxh5" and create a new PV for same local volume storage 04/18/24 18:23:18.929 STEP: Delete "local-pvtqcwm" and create a new PV for same local volume storage 04/18/24 18:23:18.944 STEP: Delete "local-pv8fctw" and create a new PV for same local volume storage 04/18/24 18:23:18.958 Apr 18 18:23:19.895: INFO: Deleting pod pod-6265ef15-ac9a-4c6d-b935-0035a00e97b2 Apr 18 18:23:19.902: INFO: Deleting PersistentVolumeClaim "pvc-krpkg" Apr 18 18:23:19.908: INFO: Deleting PersistentVolumeClaim "pvc-9ptdv" Apr 18 18:23:19.913: INFO: Deleting PersistentVolumeClaim "pvc-8k9v8" Apr 18 18:23:19.918: INFO: 24/28 pods finished Apr 18 18:23:19.918: INFO: Deleting pod pod-ae5704ac-35a5-47bd-a057-63953eab0343 Apr 18 18:23:19.926: INFO: Deleting PersistentVolumeClaim "pvc-6b7vw" STEP: Delete "local-pvs9cf9" and create a new PV for same local volume storage 04/18/24 18:23:19.93 Apr 18 18:23:19.931: INFO: Deleting PersistentVolumeClaim "pvc-vk57q" Apr 18 18:23:19.936: INFO: Deleting PersistentVolumeClaim "pvc-lb8zp" Apr 18 18:23:19.942: INFO: 25/28 pods finished STEP: Delete "local-pvx9jg9" and create a new PV for same local volume storage 04/18/24 18:23:19.945 STEP: Delete "local-pv5bsmk" and create a new PV for same local volume storage 04/18/24 18:23:19.959 STEP: Delete "local-pvx6l7z" and create a new PV for same local volume storage 04/18/24 18:23:19.971 STEP: Delete "local-pvfvd8r" 
and create a new PV for same local volume storage 04/18/24 18:23:19.986 STEP: Delete "local-pvls246" and create a new PV for same local volume storage 04/18/24 18:23:20 Apr 18 18:23:20.899: INFO: Deleting pod pod-4c7dd5b7-72db-4600-aec6-328eaf359c8f Apr 18 18:23:20.910: INFO: Deleting PersistentVolumeClaim "pvc-k4d4b" Apr 18 18:23:20.915: INFO: Deleting PersistentVolumeClaim "pvc-gmx4d" Apr 18 18:23:20.920: INFO: Deleting PersistentVolumeClaim "pvc-hnvg8" Apr 18 18:23:20.925: INFO: 26/28 pods finished STEP: Delete "local-pv7spb5" and create a new PV for same local volume storage 04/18/24 18:23:20.938 STEP: Delete "local-pvrrsjn" and create a new PV for same local volume storage 04/18/24 18:23:20.952 STEP: Delete "local-pvz85jq" and create a new PV for same local volume storage 04/18/24 18:23:20.969 Apr 18 18:23:21.894: INFO: Deleting pod pod-e518fd0e-f00d-4778-8869-3b5e80d8a4c5 Apr 18 18:23:21.902: INFO: Deleting PersistentVolumeClaim "pvc-d98g2" Apr 18 18:23:21.907: INFO: Deleting PersistentVolumeClaim "pvc-vqrzg" Apr 18 18:23:21.917: INFO: Deleting PersistentVolumeClaim "pvc-7x66k" Apr 18 18:23:21.938: INFO: 27/28 pods finished STEP: Delete "local-pvgqqg8" and create a new PV for same local volume storage 04/18/24 18:23:21.95 STEP: Delete "local-pvnnzxd" and create a new PV for same local volume storage 04/18/24 18:23:21.966 STEP: Delete "local-pvrzqsc" and create a new PV for same local volume storage 04/18/24 18:23:21.981 Apr 18 18:23:23.894: INFO: Deleting pod pod-e1331933-d024-420e-8404-eec66f73f0d5 Apr 18 18:23:23.904: INFO: Deleting PersistentVolumeClaim "pvc-9vcrq" Apr 18 18:23:23.909: INFO: Deleting PersistentVolumeClaim "pvc-kb98s" Apr 18 18:23:23.915: INFO: Deleting PersistentVolumeClaim "pvc-4cbzf" Apr 18 18:23:23.920: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] test/e2e/storage/persistent_volumes-local.go:522 STEP: Stop and wait for recycle goroutine to finish 04/18/24 18:23:23.92 STEP: Clean all PVs 04/18/24 18:23:23.92 STEP: Cleaning up 10 local volumes on node "v126-worker" 04/18/24 18:23:23.92 STEP: Cleaning up PVC and PV 04/18/24 18:23:23.921 Apr 18 18:23:23.921: INFO: pvc is nil Apr 18 18:23:23.921: INFO: Deleting PersistentVolume "local-pvj67fk" STEP: Cleaning up PVC and PV 04/18/24 18:23:23.926 Apr 18 18:23:23.926: INFO: pvc is nil Apr 18 18:23:23.926: INFO: Deleting PersistentVolume "local-pvj66qj" STEP: Cleaning up PVC and PV 04/18/24 18:23:23.93 Apr 18 18:23:23.931: INFO: pvc is nil Apr 18 18:23:23.931: INFO: Deleting PersistentVolume "local-pvwzfdj" STEP: Cleaning up PVC and PV 04/18/24 18:23:23.936 Apr 18 18:23:23.936: INFO: pvc is nil Apr 18 18:23:23.936: INFO: Deleting PersistentVolume "local-pvbvktg" STEP: Cleaning up PVC and PV 04/18/24 18:23:23.941 Apr 18 18:23:23.941: INFO: pvc is nil Apr 18 18:23:23.941: INFO: Deleting PersistentVolume "local-pvv77ss" STEP: Cleaning up PVC and PV 04/18/24 18:23:23.947 Apr 18 18:23:23.947: INFO: pvc is nil Apr 18 18:23:23.947: INFO: Deleting PersistentVolume "local-pvb5zrf" STEP: Cleaning up PVC and PV 04/18/24 18:23:23.952 Apr 18 18:23:23.952: INFO: pvc is nil Apr 18 18:23:23.952: INFO: Deleting PersistentVolume "local-pvcjk44" STEP: Cleaning up PVC and PV 04/18/24 18:23:23.957 Apr 18 18:23:23.957: INFO: pvc is nil Apr 18 18:23:23.957: INFO: Deleting PersistentVolume "local-pv946rq" STEP: Cleaning up PVC and PV 04/18/24 18:23:23.961 Apr 18 18:23:23.961: INFO: pvc is nil Apr 18 18:23:23.962: INFO: Deleting PersistentVolume "local-pvb4x5j" STEP: Cleaning up PVC and PV 04/18/24 18:23:23.967 Apr 18 
18:23:23.967: INFO: pvc is nil Apr 18 18:23:23.967: INFO: Deleting PersistentVolume "local-pv5g6k6" STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2" 04/18/24 18:23:23.972 Apr 18 18:23:23.972: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:23.972: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:23.973: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:23.974: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:24.113 Apr 18 18:23:24.113: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.113: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.114: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.114: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-0cfc23b5-94ae-4254-9e28-16374a08f7d2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6" 04/18/24 18:23:24.284 Apr 18 18:23:24.284: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.284: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.286: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.286: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:24.427 Apr 18 18:23:24.427: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.427: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.428: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.428: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-04978b0e-0487-447a-8012-2a78eebe6ae6&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23" 04/18/24 18:23:24.563 Apr 18 18:23:24.563: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.563: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.567: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.567: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:24.703 Apr 18 18:23:24.703: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.703: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.705: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.705: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-5eab056d-5401-4c3c-bec9-c564e8720d23&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635" 04/18/24 18:23:24.834 Apr 18 18:23:24.834: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.834: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.836: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.836: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:24.969 Apr 18 18:23:24.970: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:24.970: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:24.971: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:24.971: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-0faedd84-2b24-4d57-9db3-adbb1a294635&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa" 04/18/24 18:23:25.122 Apr 18 18:23:25.123: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.123: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.124: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.124: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:25.267 Apr 18 18:23:25.267: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.267: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.269: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.269: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-4f2effed-07e0-47c6-bc65-0fb97fa529aa&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60" 04/18/24 
18:23:25.41 Apr 18 18:23:25.410: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.410: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.412: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.412: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:25.557 Apr 18 18:23:25.557: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.557: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.558: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.558: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-cedfd03d-e2fc-4679-a14f-269ecc1e0f60&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3" 04/18/24 18:23:25.718 Apr 18 18:23:25.719: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.719: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.720: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.720: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:25.849 Apr 18 18:23:25.850: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.850: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.851: INFO: ExecWithOptions: Clientset creation 
Apr 18 18:23:25.851: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-5e9dedb7-eba4-43c0-8ee2-f5f05e0307e3&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a" 04/18/24 18:23:25.994 Apr 18 18:23:25.994: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:25.994: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:25.995: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:25.996: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:26.13 Apr 18 18:23:26.130: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.130: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.131: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.131: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-ad2566bd-e67c-4b03-b54a-0822df6c369a&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11" 04/18/24 18:23:26.281 Apr 18 18:23:26.281: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.281: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.282: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.283: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:26.444 Apr 18 18:23:26.444: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.444: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.445: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.445: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-4161bc4a-1cda-4b0d-afb1-9de55917bb11&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker" at path "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e" 04/18/24 18:23:26.6 Apr 18 18:23:26.601: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.601: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.602: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.602: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:26.747 Apr 18 18:23:26.747: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker-6lhv8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.747: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.749: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.749: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker-6lhv8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-1a50ef09-db20-4930-a27a-b7fbb5d3c58e&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Cleaning up 10 local volumes on node "v126-worker2" 04/18/24 18:23:26.889 STEP: Cleaning up PVC and PV 04/18/24 18:23:26.889 Apr 18 
18:23:26.889: INFO: pvc is nil Apr 18 18:23:26.889: INFO: Deleting PersistentVolume "local-pvpktlw" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.896 Apr 18 18:23:26.896: INFO: pvc is nil Apr 18 18:23:26.896: INFO: Deleting PersistentVolume "local-pvpcnjd" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.901 Apr 18 18:23:26.901: INFO: pvc is nil Apr 18 18:23:26.901: INFO: Deleting PersistentVolume "local-pvpcrz6" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.904 Apr 18 18:23:26.905: INFO: pvc is nil Apr 18 18:23:26.905: INFO: Deleting PersistentVolume "local-pvk9vgk" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.909 Apr 18 18:23:26.909: INFO: pvc is nil Apr 18 18:23:26.909: INFO: Deleting PersistentVolume "local-pv54rlh" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.913 Apr 18 18:23:26.913: INFO: pvc is nil Apr 18 18:23:26.913: INFO: Deleting PersistentVolume "local-pvsr4w8" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.916 Apr 18 18:23:26.916: INFO: pvc is nil Apr 18 18:23:26.916: INFO: Deleting PersistentVolume "local-pv2rxbq" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.92 Apr 18 18:23:26.920: INFO: pvc is nil Apr 18 18:23:26.920: INFO: Deleting PersistentVolume "local-pvz5sbj" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.924 Apr 18 18:23:26.924: INFO: pvc is nil Apr 18 18:23:26.924: INFO: Deleting PersistentVolume "local-pv6l2jh" STEP: Cleaning up PVC and PV 04/18/24 18:23:26.928 Apr 18 18:23:26.928: INFO: pvc is nil Apr 18 18:23:26.928: INFO: Deleting PersistentVolume "local-pvv622p" STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19" 04/18/24 18:23:26.932 Apr 18 18:23:26.932: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:26.933: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:26.934: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:26.934: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:27.09 Apr 18 18:23:27.090: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.090: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.091: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.091: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-55dbdf26-2db5-4475-a6d2-de0a8a20ac19&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662" 04/18/24 18:23:27.191 Apr 18 18:23:27.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.192: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.193: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.193: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:27.348 Apr 18 18:23:27.348: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.348: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.350: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.350: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-45e4afe0-e303-40d7-96dd-ac685b4c9662&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f" 04/18/24 18:23:27.503 Apr 18 18:23:27.503: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.503: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.504: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.504: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 
04/18/24 18:23:27.667 Apr 18 18:23:27.667: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.667: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.668: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.669: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-01703c6d-2dcb-45a2-a5a4-caa2cd12db4f&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2" 04/18/24 18:23:27.817 Apr 18 18:23:27.817: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.817: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.819: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.819: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:27.971 Apr 18 18:23:27.971: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:27.971: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:27.972: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:27.972: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-618a9b44-b951-4946-a571-dd9ee138aca2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00" 04/18/24 18:23:28.121 Apr 18 18:23:28.122: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.122: INFO: >>> 
kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.123: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.123: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:28.265 Apr 18 18:23:28.266: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.266: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.267: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.267: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-d2540448-7ed6-42ec-b00d-f24ffbc94a00&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf" 04/18/24 18:23:28.408 Apr 18 18:23:28.409: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.409: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.410: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.410: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:28.563 Apr 18 18:23:28.563: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.563: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.564: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.564: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-e8a2452e-c7c1-4012-af90-3b5518fdaecf&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880" 04/18/24 18:23:28.726 Apr 18 18:23:28.726: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.726: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.727: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.727: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:28.86 Apr 18 18:23:28.860: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:28.860: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:28.861: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:28.861: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-09f38913-3cc2-4afa-941f-dbf257f9c880&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3" 04/18/24 18:23:29.009 Apr 18 18:23:29.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.010: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.011: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.011: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 
04/18/24 18:23:29.159 Apr 18 18:23:29.160: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.160: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.161: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.161: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-3a576b13-a40b-4498-8787-f5b71ac3c0e3&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af" 04/18/24 18:23:29.322 Apr 18 18:23:29.322: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.322: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.323: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.323: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:29.463 Apr 18 18:23:29.463: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.463: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.465: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.465: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-fac4eaa9-ab1b-4891-8a00-23f1d52e38af&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2" 04/18/24 18:23:29.605 Apr 18 18:23:29.605: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2"] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.606: INFO: >>> 
kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.607: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.607: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/18/24 18:23:29.762 Apr 18 18:23:29.762: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2] Namespace:persistent-local-volumes-test-38 PodName:hostexec-v126-worker2-qpb6j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 18 18:23:29.762: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 18 18:23:29.765: INFO: ExecWithOptions: Clientset creation Apr 18 18:23:29.765: INFO: ExecWithOptions: execute(POST https://172.30.13.90:34095/api/v1/namespaces/persistent-local-volumes-test-38/pods/hostexec-v126-worker2-qpb6j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-f8d7be9e-3634-43ae-8736-8636c73aa4d2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 18 18:23:29.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-38" for this suite. 
04/18/24 18:23:29.935 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create volume metrics in Volume Manager test/e2e/storage/volume_metrics.go:483 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:23:29.974 Apr 18 18:23:29.975: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:23:29.976 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:23:29.989 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:23:29.993 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:23:29.997: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:23:29.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-6726" for this suite. 
04/18/24 18:23:30.002 ------------------------------ S [SKIPPED] [0.033 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create volume metrics in Volume Manager test/e2e/storage/volume_metrics.go:483 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:23:29.974 Apr 18 18:23:29.975: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:23:29.976 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:23:29.989 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:23:29.993 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:23:29.997: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:23:29.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-6726" for this suite. 04/18/24 18:23:30.002 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:23:30.008 Apr 18 18:23:30.008: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:23:30.01 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:23:30.021 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:23:30.026 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:23:30.030: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:23:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] 
Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-3308" for this suite. 04/18/24 18:23:30.035 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:23:30.008 Apr 18 18:23:30.008: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:23:30.01 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:23:30.021 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:23:30.026 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:23:30.030: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:23:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-3308" for this suite. 
04/18/24 18:23:30.035 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:23:30.059 Apr 18 18:23:30.059: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:23:30.061 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:23:30.072 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:23:30.076 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:23:30.080: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:23:30.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-8618" for this suite. 
04/18/24 18:23:30.085 ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 18:23:30.059 Apr 18 18:23:30.059: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/18/24 18:23:30.061 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:23:30.072 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:23:30.076 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 18 18:23:30.080: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 18 18:23:30.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-8618" for this suite. 04/18/24 18:23:30.085 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [SynchronizedAfterSuite] test/e2e/e2e.go:88 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 Apr 18 18:23:30.119: INFO: Running AfterSuite actions on node 1 Apr 18 18:23:30.119: INFO: Skipping dumping logs from cluster ------------------------------ [SynchronizedAfterSuite] PASSED [0.000 seconds] [SynchronizedAfterSuite] test/e2e/e2e.go:88 Begin Captured GinkgoWriter Output >> [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 Apr 18 18:23:30.119: INFO: Running AfterSuite actions on node 1 Apr 18 18:23:30.119: INFO: Skipping dumping logs from cluster << End Captured GinkgoWriter Output ------------------------------ [ReportAfterSuite] Kubernetes e2e suite report test/e2e/e2e_test.go:153 [ReportAfterSuite] TOP-LEVEL 
test/e2e/e2e_test.go:153
------------------------------
[ReportAfterSuite] PASSED [0.000 seconds]
[ReportAfterSuite] Kubernetes e2e suite report
test/e2e/e2e_test.go:153
Begin Captured GinkgoWriter Output >>
[ReportAfterSuite] TOP-LEVEL
test/e2e/e2e_test.go:153
<< End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:529
[ReportAfterSuite] TOP-LEVEL
test/e2e/framework/test_context.go:529
------------------------------
[ReportAfterSuite] PASSED [0.198 seconds]
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:529
Begin Captured GinkgoWriter Output >>
[ReportAfterSuite] TOP-LEVEL
test/e2e/framework/test_context.go:529
<< End Captured GinkgoWriter Output
------------------------------

Ran 2 of 7069 Specs in 102.823 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 7067 Skipped
PASS

Ginkgo ran 1 suite in 1m43.469193527s
Test Suite Passed
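
Note on the [PANICKED] entries above: each of the three skipped "[sig-storage] [Serial] Volume metrics" specs reports a secondary panic ("invalid memory address or nil pointer dereference") from the AfterEach at test/e2e/storage/volume_metrics.go:102. The pattern is that the provider check in the suite's BeforeEach skips the spec before any per-spec state is set up, and the cleanup node then dereferences that never-initialized state. The Go sketch below is an illustrative reconstruction of that failure mode under Ginkgo v2, not the actual volume_metrics.go code; the metricsGrabber type, its data field, and the file name are hypothetical.

// volume_metrics_skip_sketch_test.go (hypothetical file name, illustrative only)
package sketch

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// metricsGrabber stands in for whatever per-spec state the real suite builds
// in its BeforeEach; the type and field are assumptions, not the real code.
type metricsGrabber struct {
	data map[string]int
}

var _ = Describe("Volume metrics (sketch)", func() {
	var grabber *metricsGrabber

	BeforeEach(func() {
		// Mirrors the provider check seen in the captured output: the spec is
		// skipped here, so the assignment below never runs and grabber stays nil.
		Skip("Only supported for providers [gce gke aws] (not local)")
		grabber = &metricsGrabber{data: map[string]int{}}
	})

	AfterEach(func() {
		// Ginkgo still runs AfterEach for skipped specs. An unguarded cleanup like
		//
		//     Expect(grabber.data).To(BeEmpty())
		//
		// dereferences the nil pointer and produces the [PANICKED] additional
		// failure reported after each skip above. A nil guard avoids it:
		if grabber != nil {
			Expect(grabber.data).To(BeEmpty())
		}
	})

	It("should create volume metrics", func() {
		Expect(grabber).NotTo(BeNil())
	})
})

func TestVolumeMetricsSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "volume metrics skip/cleanup sketch")
}

Run in any module that depends on github.com/onsi/ginkgo/v2 and github.com/onsi/gomega. With the nil guard in place the spec is still reported as skipped; with the unguarded dereference instead, the sketch reproduces the same secondary panic recorded in this log.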