I0411 18:32:24.907317 17 e2e.go:126] Starting e2e run "6b1e4cd3-6996-4362-82b8-b0e2c584158d" on Ginkgo node 1 Apr 11 18:32:24.922: INFO: Enabling in-tree volume drivers Running Suite: Kubernetes e2e suite - /usr/local/bin ==================================================== Random Seed: 1712860344 - will randomize all specs Will run 28 of 7069 specs ------------------------------ [SynchronizedBeforeSuite] test/e2e/e2e.go:77 [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77 Apr 11 18:32:25.207: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:25.209: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Apr 11 18:32:25.237: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 11 18:32:25.269: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 11 18:32:25.269: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 11 18:32:25.269: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Apr 11 18:32:25.275: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) Apr 11 18:32:25.275: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Apr 11 18:32:25.275: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Apr 11 18:32:25.275: INFO: e2e test version: v1.26.13 Apr 11 18:32:25.277: INFO: kube-apiserver version: v1.26.6 [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77 Apr 11 18:32:25.277: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:25.282: INFO: Cluster IP family: ipv4 ------------------------------ [SynchronizedBeforeSuite] PASSED [0.076 seconds] [SynchronizedBeforeSuite] test/e2e/e2e.go:77 Begin Captured GinkgoWriter Output >> [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77 Apr 11 18:32:25.207: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:25.209: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Apr 11 18:32:25.237: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 11 18:32:25.269: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 11 18:32:25.269: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 11 18:32:25.269: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Apr 11 18:32:25.275: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) Apr 11 18:32:25.275: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Apr 11 18:32:25.275: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Apr 11 18:32:25.275: INFO: e2e test version: v1.26.13 Apr 11 18:32:25.277: INFO: kube-apiserver version: v1.26.6 [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77 Apr 11 18:32:25.277: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:25.282: INFO: Cluster IP family: ipv4 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.346 Apr 11 18:32:25.346: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.347 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.359 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.363 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.367: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9279" for this suite. 
04/11/24 18:32:25.374 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.346 Apr 11 18:32:25.346: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.347 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.359 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.363 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.367: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9279" for this suite. 04/11/24 18:32:25.374 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create metrics for total time taken in volume operations in P/V Controller test/e2e/storage/volume_metrics.go:480 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.414 Apr 11 18:32:25.414: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.415 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.426 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.43 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] 
[sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.434: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5232" for this suite. 04/11/24 18:32:25.44 ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create metrics for total time taken in volume operations in P/V Controller test/e2e/storage/volume_metrics.go:480 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.414 Apr 11 18:32:25.414: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.415 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.426 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.43 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.434: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5232" for this suite. 
04/11/24 18:32:25.44 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create volume metrics in Volume Manager test/e2e/storage/volume_metrics.go:483 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.453 Apr 11 18:32:25.453: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.455 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.465 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.47 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.474: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-8338" for this suite. 
04/11/24 18:32:25.48 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create volume metrics in Volume Manager test/e2e/storage/volume_metrics.go:483 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.453 Apr 11 18:32:25.453: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.455 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.465 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.47 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.474: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-8338" for this suite. 04/11/24 18:32:25.48 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc test/e2e/storage/volume_metrics.go:620 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.503 Apr 11 18:32:25.503: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.505 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.515 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.519 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.523: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics 
test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-3130" for this suite. 04/11/24 18:32:25.528 ------------------------------ S [SKIPPED] [0.030 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController test/e2e/storage/volume_metrics.go:500 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc test/e2e/storage/volume_metrics.go:620 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.503 Apr 11 18:32:25.503: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.505 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.515 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.519 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.523: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-3130" for this suite. 
04/11/24 18:32:25.528 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create total pv count metrics for with plugin and volume mode labels after creating pv test/e2e/storage/volume_metrics.go:630 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.563 Apr 11 18:32:25.563: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.565 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.579 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.586 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.594: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9061" for this suite. 
04/11/24 18:32:25.6 ------------------------------ S [SKIPPED] [0.043 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController test/e2e/storage/volume_metrics.go:500 should create total pv count metrics for with plugin and volume mode labels after creating pv test/e2e/storage/volume_metrics.go:630 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.563 Apr 11 18:32:25.563: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.565 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.579 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.586 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.594: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9061" for this suite. 04/11/24 18:32:25.6 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create volume metrics with the correct FilesystemMode PVC ref test/e2e/storage/volume_metrics.go:474 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.615 Apr 11 18:32:25.615: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.617 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.628 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.634 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.638: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] 
[Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5515" for this suite. 04/11/24 18:32:25.644 ------------------------------ S [SKIPPED] [0.033 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create volume metrics with the correct FilesystemMode PVC ref test/e2e/storage/volume_metrics.go:474 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.615 Apr 11 18:32:25.615: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:25.617 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.628 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.634 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:25.638: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:25.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5515" for this suite. 
04/11/24 18:32:25.644 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:252 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.673 Apr 11 18:32:25.673: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:25.675 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.686 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.69 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:25.707: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-nbq4l" in namespace "persistent-local-volumes-test-3512" to be "running" Apr 11 18:32:25.710: INFO: Pod "hostexec-v126-worker2-nbq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 3.220594ms Apr 11 18:32:27.714: INFO: Pod "hostexec-v126-worker2-nbq4l": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007489882s Apr 11 18:32:27.715: INFO: Pod "hostexec-v126-worker2-nbq4l" satisfied condition "running" Apr 11 18:32:27.715: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3512 PodName:hostexec-v126-worker2-nbq4l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:27.715: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:27.716: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:27.716: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3512/pods/hostexec-v126-worker2-nbq4l/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:27.872: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:27.872: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:27.872: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:27.872: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:27.872: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:27.872 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:27.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-3512" for this suite. 
04/11/24 18:32:27.878 ------------------------------ S [SKIPPED] [2.211 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 Two pods mounting a local volume at the same time test/e2e/storage/persistent_volumes-local.go:251 should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:252 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:25.673 Apr 11 18:32:25.673: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:25.675 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:25.686 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:25.69 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:25.707: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-nbq4l" in namespace "persistent-local-volumes-test-3512" to be "running" Apr 11 18:32:25.710: INFO: Pod "hostexec-v126-worker2-nbq4l": Phase="Pending", Reason="", readiness=false. Elapsed: 3.220594ms Apr 11 18:32:27.714: INFO: Pod "hostexec-v126-worker2-nbq4l": Phase="Running", Reason="", readiness=true. Elapsed: 2.007489882s Apr 11 18:32:27.715: INFO: Pod "hostexec-v126-worker2-nbq4l" satisfied condition "running" Apr 11 18:32:27.715: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3512 PodName:hostexec-v126-worker2-nbq4l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:27.715: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:27.716: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:27.716: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3512/pods/hostexec-v126-worker2-nbq4l/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:27.872: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:27.872: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:27.872: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:27.872: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:27.872: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:27.872 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:27.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] 
PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-3512" for this suite. 04/11/24 18:32:27.878 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc005152cf0, {0xc001fc9f58, 0x1, 0x22?}) test/e2e/storage/persistent_volumes-local.go:854 +0xa9 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc005152cf0, {0xc001fc9f58?, 0x1, 0x200?}) test/e2e/storage/persistent_volumes-local.go:863 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2() test/e2e/storage/persistent_volumes-local.go:208 +0x47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create volume metrics in Volume Manager test/e2e/storage/volume_metrics.go:483 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:27.913 Apr 11 18:32:27.913: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:27.915 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:27.926 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:27.929 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:27.934: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:27.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-6089" for this suite. 
04/11/24 18:32:27.939 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create volume metrics in Volume Manager test/e2e/storage/volume_metrics.go:483 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:27.913 Apr 11 18:32:27.913: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:27.915 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:27.926 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:27.929 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:27.934: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:27.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-6089" for this suite. 04/11/24 18:32:27.939 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC test/e2e/storage/volume_metrics.go:598 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:27.949 Apr 11 18:32:27.949: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:27.951 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:27.962 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:27.966 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:27.970: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:27.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics 
test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-1242" for this suite. 04/11/24 18:32:27.976 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController test/e2e/storage/volume_metrics.go:500 should create none metrics for pvc controller before creating any PV or PVC test/e2e/storage/volume_metrics.go:598 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:27.949 Apr 11 18:32:27.949: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:27.951 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:27.962 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:27.966 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:27.970: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:27.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-1242" for this suite. 
04/11/24 18:32:27.976 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.023 Apr 11 18:32:28.023: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.025 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.037 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.041 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.045: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-4825" for this suite. 
04/11/24 18:32:28.05 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.023 Apr 11 18:32:28.023: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.025 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.037 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.041 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.045: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-4825" for this suite. 04/11/24 18:32:28.05 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create metrics for total number of volumes in A/D Controller test/e2e/storage/volume_metrics.go:486 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.057 Apr 11 18:32:28.057: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.059 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.073 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.076 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.080: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9048" for this suite. 04/11/24 18:32:28.086 ------------------------------ S [SKIPPED] [0.034 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create metrics for total number of volumes in A/D Controller test/e2e/storage/volume_metrics.go:486 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.057 Apr 11 18:32:28.057: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.059 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.073 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.076 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.080: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9048" for this suite. 
04/11/24 18:32:28.086 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only test/e2e/storage/volume_metrics.go:602 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.095 Apr 11 18:32:28.095: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.097 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.108 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.112 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.116: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5536" for this suite. 
04/11/24 18:32:28.121 ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController test/e2e/storage/volume_metrics.go:500 should create unbound pv count metrics for pvc controller after creating pv only test/e2e/storage/volume_metrics.go:602 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.095 Apr 11 18:32:28.095: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.097 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.108 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.112 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.116: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5536" for this suite. 04/11/24 18:32:28.121 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create metrics for total time taken in volume operations in P/V Controller test/e2e/storage/volume_metrics.go:480 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.136 Apr 11 18:32:28.136: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.138 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.148 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.153 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.157: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
[AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-1136" for this suite. 04/11/24 18:32:28.162 ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create metrics for total time taken in volume operations in P/V Controller test/e2e/storage/volume_metrics.go:480 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.136 Apr 11 18:32:28.136: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.138 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.148 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.153 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.157: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-1136" for this suite. 
04/11/24 18:32:28.162 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create metrics for total number of volumes in A/D Controller test/e2e/storage/volume_metrics.go:486 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.17 Apr 11 18:32:28.170: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.172 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.183 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.187 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.191: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-4095" for this suite. 
04/11/24 18:32:28.196 ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create metrics for total number of volumes in A/D Controller test/e2e/storage/volume_metrics.go:486 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.17 Apr 11 18:32:28.170: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.172 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.183 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.187 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.191: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-4095" for this suite. 04/11/24 18:32:28.196 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create volume metrics with the correct BlockMode PVC ref test/e2e/storage/volume_metrics.go:477 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.214 Apr 11 18:32:28.214: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.216 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.227 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.231 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.235: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-7962" for this suite. 04/11/24 18:32:28.24 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create volume metrics with the correct BlockMode PVC ref test/e2e/storage/volume_metrics.go:477 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.214 Apr 11 18:32:28.214: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.216 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.227 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.231 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.235: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-7962" for this suite. 
04/11/24 18:32:28.24 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Feature:StorageProvider] [Serial] attach on previously attached volumes should work test/e2e/storage/pd.go:461 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.368 Apr 11 18:32:28.368: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pod-disks 04/11/24 18:32:28.369 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.38 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.384 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/storage/pd.go:76 Apr 11 18:32:28.388: INFO: Requires at least 2 nodes (not 1) [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] tear down framework | framework.go:193 STEP: Destroying namespace "pod-disks-9688" for this suite. 
04/11/24 18:32:28.393 ------------------------------ S [SKIPPED] [0.030 seconds] [sig-storage] Pod Disks [Feature:StorageProvider] [BeforeEach] test/e2e/storage/pd.go:76 [Serial] attach on previously attached volumes should work test/e2e/storage/pd.go:461 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.368 Apr 11 18:32:28.368: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pod-disks 04/11/24 18:32:28.369 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.38 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.384 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/storage/pd.go:76 Apr 11 18:32:28.388: INFO: Requires at least 2 nodes (not 1) [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] tear down framework | framework.go:193 STEP: Destroying namespace "pod-disks-9688" for this suite. 04/11/24 18:32:28.393 << End Captured GinkgoWriter Output Requires at least 2 nodes (not 1) In [BeforeEach] at: test/e2e/storage/pd.go:77 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.408 Apr 11 18:32:28.408: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.409 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.42 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.424 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.428: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5003" for this suite. 
04/11/24 18:32:28.433 ------------------------------ S [SKIPPED] [0.030 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.408 Apr 11 18:32:28.408: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:28.409 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.42 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.424 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:28.428: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:28.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5003" for this suite. 04/11/24 18:32:28.433 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running test/e2e/storage/persistent_volumes-local.go:656 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.443 Apr 11 18:32:28.443: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:28.445 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.455 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.459 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] Pods sharing a single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:633 [It] all pods should be running test/e2e/storage/persistent_volumes-local.go:656 STEP: Create a PVC 04/11/24 18:32:28.472 STEP: Create 2 pods to use this PVC 04/11/24 18:32:28.48 STEP: Wait for all pods are running 04/11/24 18:32:28.495 [AfterEach] Pods sharing a 
single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:647 STEP: Clean PV local-pvfkshm 04/11/24 18:32:31.503 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:31.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-9979" for this suite. 04/11/24 18:32:31.513 ------------------------------ • [3.076 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:628 all pods should be running test/e2e/storage/persistent_volumes-local.go:656 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:28.443 Apr 11 18:32:28.443: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:28.445 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:28.455 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:28.459 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] Pods sharing a single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:633 [It] all pods should be running test/e2e/storage/persistent_volumes-local.go:656 STEP: Create a PVC 04/11/24 18:32:28.472 STEP: Create 2 pods to use this PVC 04/11/24 18:32:28.48 STEP: Wait for all pods are running 04/11/24 18:32:28.495 [AfterEach] Pods sharing a single local PV [Serial] test/e2e/storage/persistent_volumes-local.go:647 STEP: Clean PV local-pvfkshm 04/11/24 18:32:31.503 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:31.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-9979" for this suite. 
04/11/24 18:32:31.513 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:31.541 Apr 11 18:32:31.542: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:31.543 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:31.555 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:31.559 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:31.577: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-mcklz" in namespace "persistent-local-volumes-test-4434" to be "running" Apr 11 18:32:31.580: INFO: Pod "hostexec-v126-worker2-mcklz": Phase="Pending", Reason="", readiness=false. Elapsed: 3.006544ms Apr 11 18:32:33.584: INFO: Pod "hostexec-v126-worker2-mcklz": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007514442s Apr 11 18:32:33.584: INFO: Pod "hostexec-v126-worker2-mcklz" satisfied condition "running" Apr 11 18:32:33.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4434 PodName:hostexec-v126-worker2-mcklz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:33.584: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:33.586: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:33.586: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-4434/pods/hostexec-v126-worker2-mcklz/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:33.747: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:33.747: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:33.747: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:33.747: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:33.747: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:33.747 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:33.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-4434" for this suite. 
04/11/24 18:32:33.753 ------------------------------ S [SKIPPED] [2.216 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 Set fsGroup for local volume test/e2e/storage/persistent_volumes-local.go:263 should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:31.541 Apr 11 18:32:31.542: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:31.543 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:31.555 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:31.559 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:31.577: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-mcklz" in namespace "persistent-local-volumes-test-4434" to be "running" Apr 11 18:32:31.580: INFO: Pod "hostexec-v126-worker2-mcklz": Phase="Pending", Reason="", readiness=false. Elapsed: 3.006544ms Apr 11 18:32:33.584: INFO: Pod "hostexec-v126-worker2-mcklz": Phase="Running", Reason="", readiness=true. Elapsed: 2.007514442s Apr 11 18:32:33.584: INFO: Pod "hostexec-v126-worker2-mcklz" satisfied condition "running" Apr 11 18:32:33.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4434 PodName:hostexec-v126-worker2-mcklz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:33.584: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:33.586: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:33.586: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-4434/pods/hostexec-v126-worker2-mcklz/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:33.747: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:33.747: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:33.747: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:33.747: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:33.747: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:33.747 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:33.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local 
test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-4434" for this suite. 04/11/24 18:32:33.753 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc0054ce090, {0xc003d93f58, 0x1, 0x22?}) test/e2e/storage/persistent_volumes-local.go:854 +0xa9 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc0054ce090, {0xc003d93f58?, 0x1, 0x200?}) test/e2e/storage/persistent_volumes-local.go:863 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2() test/e2e/storage/persistent_volumes-local.go:208 +0x47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVC should create volume metrics with the correct BlockMode PVC ref test/e2e/storage/volume_metrics.go:477 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:33.761 Apr 11 18:32:33.762: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:33.763 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:33.775 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:33.779 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:33.783: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:33.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-8952" for this suite. 
04/11/24 18:32:33.788 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create volume metrics with the correct BlockMode PVC ref test/e2e/storage/volume_metrics.go:477 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:33.761 Apr 11 18:32:33.762: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:33.763 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:33.775 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:33.779 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:33.783: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:33.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-8952" for this suite. 04/11/24 18:32:33.788 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only test/e2e/storage/volume_metrics.go:611 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:33.8 Apr 11 18:32:33.800: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:33.802 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:33.812 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:33.816 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:33.820: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:33.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics 
test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-1578" for this suite. 04/11/24 18:32:33.826 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController test/e2e/storage/volume_metrics.go:500 should create unbound pvc count metrics for pvc controller after creating pvc only test/e2e/storage/volume_metrics.go:611 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:33.8 Apr 11 18:32:33.800: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:33.802 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:33.812 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:33.816 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:33.820: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:33.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-1578" for this suite. 
04/11/24 18:32:33.826 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:33.897 Apr 11 18:32:33.897: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:33.899 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:33.909 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:33.918 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:33.936: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-p5cs5" in namespace "persistent-local-volumes-test-7645" to be "running" Apr 11 18:32:33.939: INFO: Pod "hostexec-v126-worker2-p5cs5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.926908ms Apr 11 18:32:35.943: INFO: Pod "hostexec-v126-worker2-p5cs5": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007173907s Apr 11 18:32:35.943: INFO: Pod "hostexec-v126-worker2-p5cs5" satisfied condition "running" Apr 11 18:32:35.944: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7645 PodName:hostexec-v126-worker2-p5cs5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:35.944: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:35.945: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:35.945: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7645/pods/hostexec-v126-worker2-p5cs5/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:36.107: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:36.107: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:36.107: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:36.107: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:36.107: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:36.108 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:36.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-7645" for this suite. 
04/11/24 18:32:36.113 ------------------------------ S [SKIPPED] [2.221 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 Two pods mounting a local volume one after the other test/e2e/storage/persistent_volumes-local.go:257 should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:33.897 Apr 11 18:32:33.897: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:33.899 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:33.909 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:33.918 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:33.936: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-p5cs5" in namespace "persistent-local-volumes-test-7645" to be "running" Apr 11 18:32:33.939: INFO: Pod "hostexec-v126-worker2-p5cs5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.926908ms Apr 11 18:32:35.943: INFO: Pod "hostexec-v126-worker2-p5cs5": Phase="Running", Reason="", readiness=true. Elapsed: 2.007173907s Apr 11 18:32:35.943: INFO: Pod "hostexec-v126-worker2-p5cs5" satisfied condition "running" Apr 11 18:32:35.944: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7645 PodName:hostexec-v126-worker2-p5cs5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:35.944: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:35.945: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:35.945: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7645/pods/hostexec-v126-worker2-p5cs5/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:36.107: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:36.107: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:36.107: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:36.107: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:36.107: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:36.108 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:36.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] 
PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-7645" for this suite. 04/11/24 18:32:36.113 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc0054f0cf0, {0xc004a81f58, 0x1, 0x22?}) test/e2e/storage/persistent_volumes-local.go:854 +0xa9 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc0054f0cf0, {0xc004a81f58?, 0x1, 0x200?}) test/e2e/storage/persistent_volumes-local.go:863 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2() test/e2e/storage/persistent_volumes-local.go:208 +0x47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow] test/e2e/storage/persistent_volumes-local.go:277 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:36.124 Apr 11 18:32:36.124: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:36.126 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:36.137 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:36.142 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:36.158: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-gms9q" in namespace "persistent-local-volumes-test-3828" to be "running" Apr 11 18:32:36.162: INFO: Pod "hostexec-v126-worker2-gms9q": Phase="Pending", Reason="", readiness=false. Elapsed: 3.207147ms Apr 11 18:32:38.167: INFO: Pod "hostexec-v126-worker2-gms9q": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008032968s Apr 11 18:32:38.167: INFO: Pod "hostexec-v126-worker2-gms9q" satisfied condition "running" Apr 11 18:32:38.167: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3828 PodName:hostexec-v126-worker2-gms9q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:38.167: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:38.168: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:38.168: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3828/pods/hostexec-v126-worker2-gms9q/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:38.318: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:38.318: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:38.318: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:38.318: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:38.318: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:38.318 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:38.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-3828" for this suite. 
04/11/24 18:32:38.323 ------------------------------ S [SKIPPED] [2.204 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 Set fsGroup for local volume test/e2e/storage/persistent_volumes-local.go:263 should set same fsGroup for two pods simultaneously [Slow] test/e2e/storage/persistent_volumes-local.go:277 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:36.124 Apr 11 18:32:36.124: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:36.126 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:36.137 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:36.142 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:36.158: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-gms9q" in namespace "persistent-local-volumes-test-3828" to be "running" Apr 11 18:32:36.162: INFO: Pod "hostexec-v126-worker2-gms9q": Phase="Pending", Reason="", readiness=false. Elapsed: 3.207147ms Apr 11 18:32:38.167: INFO: Pod "hostexec-v126-worker2-gms9q": Phase="Running", Reason="", readiness=true. Elapsed: 2.008032968s Apr 11 18:32:38.167: INFO: Pod "hostexec-v126-worker2-gms9q" satisfied condition "running" Apr 11 18:32:38.167: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3828 PodName:hostexec-v126-worker2-gms9q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:38.167: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:38.168: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:38.168: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3828/pods/hostexec-v126-worker2-gms9q/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:38.318: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:38.318: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:38.318: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:38.318: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:38.318: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:38.318 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:38.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local 
test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-3828" for this suite. 04/11/24 18:32:38.323 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc004370480, {0xc004a83f58, 0x1, 0x0?}) test/e2e/storage/persistent_volumes-local.go:854 +0xa9 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc004370480, {0xc004a83f58?, 0x1, 0xc006530a00?}) test/e2e/storage/persistent_volumes-local.go:863 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2() test/e2e/storage/persistent_volumes-local.go:208 +0x47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:38.331 Apr 11 18:32:38.331: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:38.333 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:38.344 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:38.347 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:38.351: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:38.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-7821" for this suite. 
04/11/24 18:32:38.357 ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:38.331 Apr 11 18:32:38.331: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:38.333 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:38.344 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:38.347 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:38.351: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:38.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-7821" for this suite. 04/11/24 18:32:38.357 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics Ephemeral should create volume metrics with the correct FilesystemMode PVC ref test/e2e/storage/volume_metrics.go:474 [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:38.366 Apr 11 18:32:38.366: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:38.368 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:38.379 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:38.383 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:38.387: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:38.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] 
Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-194" for this suite. 04/11/24 18:32:38.393 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create volume metrics with the correct FilesystemMode PVC ref test/e2e/storage/volume_metrics.go:474 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:38.366 Apr 11 18:32:38.366: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:32:38.368 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:38.379 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:38.383 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:32:38.387: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:32:38.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-194" for this suite. 
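The [DeferCleanup (Each)] lines in these reports always appear in the same order: the metrics cleanup registered at init.go:33 runs first, then "dump namespaces", then "tear down framework". That matches Ginkgo v2's DeferCleanup semantics: callbacks run after the spec in reverse (LIFO) registration order. A minimal sketch follows, assuming ginkgo/v2 and gomega as used by this suite; the spec and messages are illustrative.

// Minimal sketch of Ginkgo v2's DeferCleanup ordering, which is why the
// reports above show the metrics cleanup first and "tear down framework"
// last: callbacks run in reverse (LIFO) registration order.
package e2esketch_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestCleanupOrdering(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "DeferCleanup ordering sketch")
}

var _ = Describe("cleanup ordering", func() {
	BeforeEach(func() {
		DeferCleanup(func() { GinkgoWriter.Println("tear down framework") }) // registered first, runs last
		DeferCleanup(func() { GinkgoWriter.Println("dump namespaces") })
		DeferCleanup(func() { GinkgoWriter.Println("metrics cleanup") }) // registered last, runs first
	})

	It("runs DeferCleanup callbacks in reverse registration order", func() {
		Expect(true).To(BeTrue())
	})
})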
04/11/24 18:32:38.393 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func33.2() test/e2e/storage/volume_metrics.go:102 +0x6c ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 test/e2e/storage/persistent_volumes-local.go:241 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:38.401 Apr 11 18:32:38.401: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:38.403 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:38.416 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:38.42 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:38.437: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-74m49" in namespace "persistent-local-volumes-test-5046" to be "running" Apr 11 18:32:38.440: INFO: Pod "hostexec-v126-worker2-74m49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.486499ms Apr 11 18:32:40.445: INFO: Pod "hostexec-v126-worker2-74m49": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008234824s Apr 11 18:32:40.445: INFO: Pod "hostexec-v126-worker2-74m49" satisfied condition "running" Apr 11 18:32:40.445: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5046 PodName:hostexec-v126-worker2-74m49 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:40.445: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:40.447: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:40.447: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-5046/pods/hostexec-v126-worker2-74m49/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:40.619: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:40.619: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:40.619: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:40.619: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:40.619: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:40.62 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:40.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-5046" for this suite. 
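Each ExecWithOptions entry above is the framework exec'ing into a privileged hostexec (agnhost) pod through the pod exec subresource (the logged POST .../pods/hostexec-v126-worker2-.../exec) and wrapping the real command in nsenter so it runs in the node's mount namespace via /rootfs/proc/1/ns/mnt, presumably because the pod mounts the host filesystem at /rootfs. The sketch below reproduces only that command wrapping, not the exec API call, and would run only where nsenter and the /rootfs bind mount exist.

// Sketch of the command wrapping used by the hostexec pod above: the probe is
// run as `nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c <cmd>` so it executes
// in the node's mount namespace rather than the pod's. Only the wrapping is
// reproduced; the suite drives it through the pod exec subresource instead.
package main

import (
	"fmt"
	"os/exec"
)

// hostCommand builds the nsenter invocation exactly as logged.
func hostCommand(cmd string) *exec.Cmd {
	return exec.Command("nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--", "sh", "-c", cmd)
}

func main() {
	probe := `ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l`
	out, err := hostCommand(probe).CombinedOutput() // needs nsenter and the /rootfs bind mount
	fmt.Printf("output: %q err: %v\n", out, err)
}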
04/11/24 18:32:40.625 ------------------------------ S [SKIPPED] [2.229 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 One pod requesting one prebound PVC test/e2e/storage/persistent_volumes-local.go:212 should be able to mount volume and write from pod1 test/e2e/storage/persistent_volumes-local.go:241 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:38.401 Apr 11 18:32:38.401: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:38.403 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:38.416 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:38.42 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:38.437: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-74m49" in namespace "persistent-local-volumes-test-5046" to be "running" Apr 11 18:32:38.440: INFO: Pod "hostexec-v126-worker2-74m49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.486499ms Apr 11 18:32:40.445: INFO: Pod "hostexec-v126-worker2-74m49": Phase="Running", Reason="", readiness=true. Elapsed: 2.008234824s Apr 11 18:32:40.445: INFO: Pod "hostexec-v126-worker2-74m49" satisfied condition "running" Apr 11 18:32:40.445: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5046 PodName:hostexec-v126-worker2-74m49 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:40.445: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:40.447: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:40.447: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-5046/pods/hostexec-v126-worker2-74m49/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:40.619: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:40.619: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:40.619: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:40.619: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:40.619: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:40.62 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:40.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local 
test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-5046" for this suite. 04/11/24 18:32:40.625 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc002949290, {0xc003d91f58, 0x1, 0x22?}) test/e2e/storage/persistent_volumes-local.go:854 +0xa9 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc002949290, {0xc003d91f58?, 0x1, 0x200?}) test/e2e/storage/persistent_volumes-local.go:863 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2() test/e2e/storage/persistent_volumes-local.go:208 +0x47 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1 test/e2e/storage/persistent_volumes-local.go:235 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:40.634 Apr 11 18:32:40.634: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:40.636 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:40.648 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:40.651 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:40.668: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-s4jwx" in namespace "persistent-local-volumes-test-5310" to be "running" Apr 11 18:32:40.671: INFO: Pod "hostexec-v126-worker2-s4jwx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.954317ms Apr 11 18:32:42.676: INFO: Pod "hostexec-v126-worker2-s4jwx": Phase="Running", Reason="", readiness=true. 
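The [PANICKED] blocks attached to these skipped specs come from the cleanup path rather than the test body: the BeforeEach skips at persistent_volumes-local.go:1255 before any PV or PVC is created, yet the AfterEach at :207-:208 still calls cleanupLocalVolumes on a one-element volume list, and cleanupLocalPVCsPVs dereferences the apparently never-populated objects at :854. The following is a hedged sketch of a nil-tolerant cleanup; the types and helper name are stand-ins, not the suite's real ones.

// Illustrative sketch of a nil-tolerant cleanup for the panic above: when a
// spec skips during BeforeEach, the local-volume structs exist but their PV
// and PVC fields were never filled in, so an unguarded cleanup dereferences
// nil. Types and names are stand-ins, not the suite's actual helpers.
package e2esketch

import "fmt"

type persistentVolume struct{ name string }
type persistentVolumeClaim struct{ name string }

type localTestVolume struct {
	pv  *persistentVolume
	pvc *persistentVolumeClaim
}

// cleanupLocalTestVolumes deletes only what was actually created.
func cleanupLocalTestVolumes(vols []*localTestVolume) {
	for _, v := range vols {
		if v == nil {
			continue
		}
		if v.pvc == nil {
			fmt.Println("pvc is nil") // the guarded path seen later in this log
		} else {
			fmt.Println("Deleting PersistentVolumeClaim", v.pvc.name)
		}
		if v.pv != nil {
			fmt.Println("Deleting PersistentVolume", v.pv.name)
		}
	}
}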
Elapsed: 2.007281615s Apr 11 18:32:42.676: INFO: Pod "hostexec-v126-worker2-s4jwx" satisfied condition "running" Apr 11 18:32:42.676: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5310 PodName:hostexec-v126-worker2-s4jwx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:42.676: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:42.677: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:42.677: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-5310/pods/hostexec-v126-worker2-s4jwx/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:42.824: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:42.824: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:42.824: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:42.824: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:42.824: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:42.824 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:42.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-5310" for this suite. 
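The probe output above also explains why the skip is reported with exit code 0: ls fails because /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ does not exist on this node, but a pipeline's status is that of its last command (wc -l), so the framework sees "0\n" on stdout, the ls error only on stderr, and skips on the count. Below is a small sketch of that decision, with testing.T standing in for the framework's skip and an illustrative helper name.

// Sketch of the "Requires at least 1 scsi fs localSSD" gate. The probe's exit
// code is 0 even though ls fails, because `ls ... | wc -l` reports the status
// of wc; the decision is therefore made on the counted stdout ("0\n").
// requireScsiFsLocalSSDs is an illustrative name, not the framework's helper.
package e2esketch

import (
	"strconv"
	"strings"
	"testing"
)

func requireScsiFsLocalSSDs(t *testing.T, probeStdout string, min int) {
	t.Helper()
	n, err := strconv.Atoi(strings.TrimSpace(probeStdout))
	if err != nil || n < min {
		t.Skipf("Requires at least %d scsi fs localSSD", min)
	}
}

func TestGCELocalSSDScsiFs(t *testing.T) {
	requireScsiFsLocalSSDs(t, "0\n", 1) // stdout captured from the hostexec probe above
	// the [Volume type: gce-localssd-scsi-fs] specs would run here
}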
04/11/24 18:32:42.829 ------------------------------ S [SKIPPED] [2.201 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 One pod requesting one prebound PVC test/e2e/storage/persistent_volumes-local.go:212 should be able to mount volume and read from pod1 test/e2e/storage/persistent_volumes-local.go:235 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:40.634 Apr 11 18:32:40.634: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:40.636 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:40.648 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:40.651 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:32:40.668: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-s4jwx" in namespace "persistent-local-volumes-test-5310" to be "running" Apr 11 18:32:40.671: INFO: Pod "hostexec-v126-worker2-s4jwx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.954317ms Apr 11 18:32:42.676: INFO: Pod "hostexec-v126-worker2-s4jwx": Phase="Running", Reason="", readiness=true. Elapsed: 2.007281615s Apr 11 18:32:42.676: INFO: Pod "hostexec-v126-worker2-s4jwx" satisfied condition "running" Apr 11 18:32:42.676: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5310 PodName:hostexec-v126-worker2-s4jwx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:42.676: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:42.677: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:42.677: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-5310/pods/hostexec-v126-worker2-s4jwx/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:32:42.824: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:32:42.824: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:32:42.824: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:32:42.824: INFO: exec v126-worker2: exit code: 0 Apr 11 18:32:42.824: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:32:42.824 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:32:42.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local 
test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-5310" for this suite. 04/11/24 18:32:42.829 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure: [PANICKED] Test Panicked In [AfterEach] at: /usr/local/go/src/runtime/panic.go:260 runtime error: invalid memory address or nil pointer dereference Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalPVCsPVs(0xc0032f62d0, {0xc00066df58, 0x1, 0x3bdba89?}) test/e2e/storage/persistent_volumes-local.go:854 +0xa9 k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc0032f62d0, {0xc00066df58?, 0x1, 0x300000000000000?}) test/e2e/storage/persistent_volumes-local.go:863 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func25.2.2() test/e2e/storage/persistent_volumes-local.go:208 +0x47 ------------------------------ SSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes test/e2e/storage/persistent_volumes-local.go:534 [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:42.836 Apr 11 18:32:42.836: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:42.838 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:42.85 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:42.854 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] Stress with local volumes [Serial] test/e2e/storage/persistent_volumes-local.go:458 STEP: Setting up 10 local volumes on node "v126-worker2" 04/11/24 18:32:42.866 STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837" 04/11/24 18:32:42.867 Apr 11 18:32:42.875: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-gx475" in namespace "persistent-local-volumes-test-1854" to be "running" Apr 11 18:32:42.878: INFO: Pod "hostexec-v126-worker2-gx475": Phase="Pending", Reason="", readiness=false. Elapsed: 3.187484ms Apr 11 18:32:44.883: INFO: Pod "hostexec-v126-worker2-gx475": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007804751s Apr 11 18:32:46.883: INFO: Pod "hostexec-v126-worker2-gx475": Phase="Running", Reason="", readiness=true. 
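This stress spec prepares ten tmpfs-backed local volumes on "v126-worker2": for each /tmp/local-volume-test-<uuid> path it runs the mkdir-and-mount command shown in the exec entries that follow, and the AfterEach later unmounts and removes the same directories. The sketch below only builds those shell fragments, mirroring the logged commands; in the suite they are executed through the same hostexec/nsenter wrapping shown earlier.

// Sketch of the per-volume node commands used by this stress spec: each local
// volume is a 10 MiB tmpfs mounted on a /tmp/local-volume-test-<uuid>
// directory, and teardown unmounts it and removes the directory. The strings
// mirror the logged commands; running them requires the hostexec/nsenter path.
package e2esketch

import "fmt"

// setupTmpfsCmd creates the mount point and mounts a small tmpfs on it.
func setupTmpfsCmd(path string) string {
	return fmt.Sprintf("mkdir -p %q && mount -t tmpfs -o size=10m tmpfs-%q %q", path, path, path)
}

// teardownTmpfsCmds unmounts the tmpfs and then removes the test directory.
func teardownTmpfsCmds(path string) []string {
	return []string{
		fmt.Sprintf("umount %q", path),
		fmt.Sprintf("rm -r %s", path),
	}
}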
Elapsed: 4.008286335s Apr 11 18:32:46.883: INFO: Pod "hostexec-v126-worker2-gx475" satisfied condition "running" Apr 11 18:32:46.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837" "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:46.884: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:46.885: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:46.885: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837%22+%22%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d" 04/11/24 18:32:47.058 Apr 11 18:32:47.058: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d" "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.058: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.059: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.059: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d%22+%22%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596" 04/11/24 18:32:47.229 Apr 11 18:32:47.229: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596" "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.229: INFO: >>> kubeConfig: 
/home/xtesting/.kube/config Apr 11 18:32:47.231: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.231: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596%22+%22%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc" 04/11/24 18:32:47.382 Apr 11 18:32:47.382: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc" "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.382: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.383: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.383: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc%22+%22%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63" 04/11/24 18:32:47.535 Apr 11 18:32:47.535: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63" "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.535: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.537: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.537: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63%22+%22%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f" 04/11/24 18:32:47.697 Apr 11 18:32:47.697: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f" "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.697: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.698: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.698: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f%22+%22%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba" 04/11/24 18:32:47.853 Apr 11 18:32:47.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba" "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.854: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.855: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.855: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba%22+%22%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path 
"/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed" 04/11/24 18:32:47.991 Apr 11 18:32:47.991: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed" "/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.991: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.992: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.992: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed%22+%22%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f" 04/11/24 18:32:48.145 Apr 11 18:32:48.146: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f" "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:48.146: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:48.148: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:48.148: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f%22+%22%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588" 04/11/24 18:32:48.283 Apr 11 18:32:48.283: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588" "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:48.283: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 
18:32:48.284: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:48.284: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588%22+%22%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Create 10 PVs 04/11/24 18:32:48.367 STEP: Start a goroutine to recycle unbound PVs 04/11/24 18:32:48.407 [It] should be able to process many pods and reuse local volumes test/e2e/storage/persistent_volumes-local.go:534 STEP: Creating 4 pods periodically 04/11/24 18:32:48.407 STEP: Waiting for all pods to complete successfully 04/11/24 18:32:48.407 Apr 11 18:32:57.479: INFO: Deleting pod pod-b9be9e0e-2d80-4f80-85ba-c67e8bb1b114 Apr 11 18:32:57.488: INFO: Deleting PersistentVolumeClaim "pvc-lm9x9" Apr 11 18:32:57.494: INFO: Deleting PersistentVolumeClaim "pvc-v958r" Apr 11 18:32:57.499: INFO: Deleting PersistentVolumeClaim "pvc-rnclw" Apr 11 18:32:57.504: INFO: 1/16 pods finished STEP: Delete "local-pv9nmq9" and create a new PV for same local volume storage 04/11/24 18:32:57.515 STEP: Delete "local-pv9nmq9" and create a new PV for same local volume storage 04/11/24 18:32:57.529 STEP: Delete "local-pvltp4n" and create a new PV for same local volume storage 04/11/24 18:32:57.534 STEP: Delete "local-pvt5w6w" and create a new PV for same local volume storage 04/11/24 18:32:57.546 Apr 11 18:32:59.479: INFO: Deleting pod pod-ff488f99-17ff-4922-94aa-3570ef79d09d Apr 11 18:32:59.489: INFO: Deleting PersistentVolumeClaim "pvc-g4xgv" Apr 11 18:32:59.494: INFO: Deleting PersistentVolumeClaim "pvc-j5hrt" Apr 11 18:32:59.499: INFO: Deleting PersistentVolumeClaim "pvc-5ds89" Apr 11 18:32:59.504: INFO: 2/16 pods finished STEP: Delete "local-pvrvrnd" and create a new PV for same local volume storage 04/11/24 18:32:59.515 STEP: Delete "local-pvbmcbw" and create a new PV for same local volume storage 04/11/24 18:32:59.53 STEP: Delete "local-pvp95kk" and create a new PV for same local volume storage 04/11/24 18:32:59.544 Apr 11 18:33:06.480: INFO: Deleting pod pod-e76924b7-22b5-4866-964a-3d9bd04636ac Apr 11 18:33:06.489: INFO: Deleting PersistentVolumeClaim "pvc-fpj9v" Apr 11 18:33:06.494: INFO: Deleting PersistentVolumeClaim "pvc-p2zf5" Apr 11 18:33:06.499: INFO: Deleting PersistentVolumeClaim "pvc-9hz77" Apr 11 18:33:06.503: INFO: 3/16 pods finished STEP: Delete "local-pvhvk5s" and create a new PV for same local volume storage 04/11/24 18:33:06.513 STEP: Delete "local-pvl7tng" and create a new PV for same local volume storage 04/11/24 18:33:06.527 STEP: Delete "local-pvk9h8w" and create a new PV for same local volume storage 04/11/24 18:33:06.541 Apr 11 18:33:11.479: INFO: Deleting pod pod-8c3b1cdf-6ee9-44b5-860b-b6f998e28875 Apr 11 18:33:11.493: INFO: Deleting PersistentVolumeClaim "pvc-h554z" Apr 11 18:33:11.498: INFO: Deleting PersistentVolumeClaim "pvc-xtrff" Apr 11 18:33:11.503: INFO: Deleting PersistentVolumeClaim "pvc-8s2b5" Apr 11 18:33:11.509: INFO: 4/16 pods finished STEP: Delete "local-pvsghcf" and create a new PV for same local volume storage 04/11/24 18:33:11.521 STEP: Delete "local-pvx8qv5" and create a new PV for same 
local volume storage 04/11/24 18:33:11.554 STEP: Delete "local-pv7z8lq" and create a new PV for same local volume storage 04/11/24 18:33:11.571 Apr 11 18:33:13.480: INFO: Deleting pod pod-473472f1-fd30-4c38-b22f-cb9dbb5a828f Apr 11 18:33:13.489: INFO: Deleting PersistentVolumeClaim "pvc-rkz4f" Apr 11 18:33:13.495: INFO: Deleting PersistentVolumeClaim "pvc-8q9kv" Apr 11 18:33:13.500: INFO: Deleting PersistentVolumeClaim "pvc-zjnb6" Apr 11 18:33:13.505: INFO: 5/16 pods finished STEP: Delete "local-pv2zsfj" and create a new PV for same local volume storage 04/11/24 18:33:13.52 STEP: Delete "local-pvc282m" and create a new PV for same local volume storage 04/11/24 18:33:13.535 STEP: Delete "local-pvfj27s" and create a new PV for same local volume storage 04/11/24 18:33:13.55 Apr 11 18:33:20.479: INFO: Deleting pod pod-c386cde0-0640-4cac-8fc9-414799a401fc Apr 11 18:33:20.489: INFO: Deleting PersistentVolumeClaim "pvc-4mxcj" Apr 11 18:33:20.494: INFO: Deleting PersistentVolumeClaim "pvc-p7nqb" Apr 11 18:33:20.499: INFO: Deleting PersistentVolumeClaim "pvc-hcnfm" Apr 11 18:33:20.504: INFO: 6/16 pods finished STEP: Delete "local-pvqj7lq" and create a new PV for same local volume storage 04/11/24 18:33:20.517 STEP: Delete "local-pvh5h5m" and create a new PV for same local volume storage 04/11/24 18:33:20.532 STEP: Delete "local-pvlpwtt" and create a new PV for same local volume storage 04/11/24 18:33:20.545 Apr 11 18:33:29.479: INFO: Deleting pod pod-d9a4bfb1-40ae-4c88-9322-35aa364562c1 Apr 11 18:33:29.489: INFO: Deleting PersistentVolumeClaim "pvc-7r7kg" Apr 11 18:33:29.493: INFO: Deleting PersistentVolumeClaim "pvc-8p97b" Apr 11 18:33:29.498: INFO: Deleting PersistentVolumeClaim "pvc-l6m5t" Apr 11 18:33:29.503: INFO: 7/16 pods finished STEP: Delete "local-pvghs9j" and create a new PV for same local volume storage 04/11/24 18:33:29.517 STEP: Delete "local-pvhp72w" and create a new PV for same local volume storage 04/11/24 18:33:29.532 STEP: Delete "local-pvmdz2q" and create a new PV for same local volume storage 04/11/24 18:33:29.546 Apr 11 18:33:30.480: INFO: Deleting pod pod-16006972-6962-4ede-9562-6ad564800827 Apr 11 18:33:30.488: INFO: Deleting PersistentVolumeClaim "pvc-ctw7r" Apr 11 18:33:30.493: INFO: Deleting PersistentVolumeClaim "pvc-bt62n" Apr 11 18:33:30.498: INFO: Deleting PersistentVolumeClaim "pvc-n7nbj" Apr 11 18:33:30.503: INFO: 8/16 pods finished STEP: Delete "local-pv7xgsv" and create a new PV for same local volume storage 04/11/24 18:33:30.515 STEP: Delete "local-pvtvnnw" and create a new PV for same local volume storage 04/11/24 18:33:30.531 STEP: Delete "local-pv6nb7x" and create a new PV for same local volume storage 04/11/24 18:33:30.543 Apr 11 18:33:33.479: INFO: Deleting pod pod-69526187-5636-43ff-b4a1-7befef5c4eba Apr 11 18:33:33.487: INFO: Deleting PersistentVolumeClaim "pvc-qqlcr" Apr 11 18:33:33.492: INFO: Deleting PersistentVolumeClaim "pvc-bvjxn" Apr 11 18:33:33.497: INFO: Deleting PersistentVolumeClaim "pvc-mq8nz" Apr 11 18:33:33.503: INFO: 9/16 pods finished STEP: Delete "local-pvw6rn2" and create a new PV for same local volume storage 04/11/24 18:33:33.513 STEP: Delete "local-pvg2kns" and create a new PV for same local volume storage 04/11/24 18:33:33.528 STEP: Delete "local-pvmb4j4" and create a new PV for same local volume storage 04/11/24 18:33:33.542 Apr 11 18:33:44.479: INFO: Deleting pod pod-524a6951-bf09-40a2-b942-132d3e893cad Apr 11 18:33:44.488: INFO: Deleting PersistentVolumeClaim "pvc-tpfz9" Apr 11 18:33:44.494: INFO: Deleting PersistentVolumeClaim 
"pvc-qfsth" Apr 11 18:33:44.499: INFO: Deleting PersistentVolumeClaim "pvc-cpg9c" Apr 11 18:33:44.504: INFO: 10/16 pods finished Apr 11 18:33:44.504: INFO: Deleting pod pod-d5869120-a5a9-4b2e-9421-f3c7fe418ffb Apr 11 18:33:44.512: INFO: Deleting PersistentVolumeClaim "pvc-d9pcz" Apr 11 18:33:44.517: INFO: Deleting PersistentVolumeClaim "pvc-2sr5g" STEP: Delete "local-pvmls9v" and create a new PV for same local volume storage 04/11/24 18:33:44.519 Apr 11 18:33:44.521: INFO: Deleting PersistentVolumeClaim "pvc-gfb4f" Apr 11 18:33:44.526: INFO: 11/16 pods finished STEP: Delete "local-pvmls9v" and create a new PV for same local volume storage 04/11/24 18:33:44.532 STEP: Delete "local-pv7fzh5" and create a new PV for same local volume storage 04/11/24 18:33:44.535 STEP: Delete "local-pv4r7l4" and create a new PV for same local volume storage 04/11/24 18:33:44.547 STEP: Delete "local-pvv9kkw" and create a new PV for same local volume storage 04/11/24 18:33:44.562 STEP: Delete "local-pvthgvv" and create a new PV for same local volume storage 04/11/24 18:33:44.578 STEP: Delete "local-pv6kv4s" and create a new PV for same local volume storage 04/11/24 18:33:44.593 Apr 11 18:33:48.479: INFO: Deleting pod pod-0bf26b7e-8829-4e9d-8185-c0d2ce047ee7 Apr 11 18:33:48.487: INFO: Deleting PersistentVolumeClaim "pvc-n4ssd" Apr 11 18:33:48.492: INFO: Deleting PersistentVolumeClaim "pvc-t4tvf" Apr 11 18:33:48.497: INFO: Deleting PersistentVolumeClaim "pvc-jfgwx" Apr 11 18:33:48.501: INFO: 12/16 pods finished STEP: Delete "local-pvnm2wz" and create a new PV for same local volume storage 04/11/24 18:33:48.514 STEP: Delete "local-pv5ltt8" and create a new PV for same local volume storage 04/11/24 18:33:48.53 STEP: Delete "local-pvqgg86" and create a new PV for same local volume storage 04/11/24 18:33:48.542 Apr 11 18:33:56.479: INFO: Deleting pod pod-f144abb6-829b-44b0-b9ca-352f16f97ca2 Apr 11 18:33:56.493: INFO: Deleting PersistentVolumeClaim "pvc-9vgrf" Apr 11 18:33:56.499: INFO: Deleting PersistentVolumeClaim "pvc-wk697" Apr 11 18:33:56.504: INFO: Deleting PersistentVolumeClaim "pvc-zpr68" Apr 11 18:33:56.509: INFO: 13/16 pods finished STEP: Delete "local-pvtbvv2" and create a new PV for same local volume storage 04/11/24 18:33:56.523 STEP: Delete "local-pvssv66" and create a new PV for same local volume storage 04/11/24 18:33:56.538 STEP: Delete "local-pv77z5r" and create a new PV for same local volume storage 04/11/24 18:33:56.553 Apr 11 18:33:57.481: INFO: Deleting pod pod-88eb423c-d1bb-4b25-9b20-27fb1853fd73 Apr 11 18:33:57.492: INFO: Deleting PersistentVolumeClaim "pvc-xjwmv" Apr 11 18:33:57.498: INFO: Deleting PersistentVolumeClaim "pvc-2bm8v" Apr 11 18:33:57.504: INFO: Deleting PersistentVolumeClaim "pvc-j4pnl" Apr 11 18:33:57.509: INFO: 14/16 pods finished STEP: Delete "local-pvvzhzn" and create a new PV for same local volume storage 04/11/24 18:33:57.521 STEP: Delete "local-pvs2g6d" and create a new PV for same local volume storage 04/11/24 18:33:57.536 STEP: Delete "local-pvqqr6v" and create a new PV for same local volume storage 04/11/24 18:33:57.547 Apr 11 18:34:05.479: INFO: Deleting pod pod-5304b08b-b2b6-4a8c-8ef0-1e1ab02f889a Apr 11 18:34:05.486: INFO: Deleting PersistentVolumeClaim "pvc-6wbj9" Apr 11 18:34:05.491: INFO: Deleting PersistentVolumeClaim "pvc-fxzj7" Apr 11 18:34:05.497: INFO: Deleting PersistentVolumeClaim "pvc-gxph8" Apr 11 18:34:05.501: INFO: 15/16 pods finished STEP: Delete "local-pv4t2ws" and create a new PV for same local volume storage 04/11/24 18:34:05.512 STEP: Delete 
"local-pv275gr" and create a new PV for same local volume storage 04/11/24 18:34:05.528 STEP: Delete "local-pvqqjkj" and create a new PV for same local volume storage 04/11/24 18:34:05.541 Apr 11 18:34:09.479: INFO: Deleting pod pod-ec008659-2bf3-4538-8ed8-fd5fabc7026c Apr 11 18:34:09.487: INFO: Deleting PersistentVolumeClaim "pvc-qv72n" Apr 11 18:34:09.493: INFO: Deleting PersistentVolumeClaim "pvc-rwn86" Apr 11 18:34:09.498: INFO: Deleting PersistentVolumeClaim "pvc-qv5gv" Apr 11 18:34:09.503: INFO: 16/16 pods finished [AfterEach] Stress with local volumes [Serial] test/e2e/storage/persistent_volumes-local.go:522 STEP: Stop and wait for recycle goroutine to finish 04/11/24 18:34:09.503 STEP: Clean all PVs 04/11/24 18:34:09.503 STEP: Cleaning up 10 local volumes on node "v126-worker2" 04/11/24 18:34:09.503 STEP: Cleaning up PVC and PV 04/11/24 18:34:09.503 Apr 11 18:34:09.504: INFO: pvc is nil Apr 11 18:34:09.504: INFO: Deleting PersistentVolume "local-pvl25m8" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.509 Apr 11 18:34:09.509: INFO: pvc is nil Apr 11 18:34:09.509: INFO: Deleting PersistentVolume "local-pvcqnsh" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.514 Apr 11 18:34:09.514: INFO: pvc is nil Apr 11 18:34:09.514: INFO: Deleting PersistentVolume "local-pvdfhzn" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.519 Apr 11 18:34:09.519: INFO: pvc is nil Apr 11 18:34:09.519: INFO: Deleting PersistentVolume "local-pvqlvnm" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.524 Apr 11 18:34:09.524: INFO: pvc is nil Apr 11 18:34:09.524: INFO: Deleting PersistentVolume "local-pvvgw62" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.529 Apr 11 18:34:09.529: INFO: pvc is nil Apr 11 18:34:09.529: INFO: Deleting PersistentVolume "local-pvrvm8q" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.534 Apr 11 18:34:09.534: INFO: pvc is nil Apr 11 18:34:09.534: INFO: Deleting PersistentVolume "local-pv5m9bz" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.538 Apr 11 18:34:09.539: INFO: pvc is nil Apr 11 18:34:09.539: INFO: Deleting PersistentVolume "local-pvl7q4l" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.544 Apr 11 18:34:09.544: INFO: pvc is nil Apr 11 18:34:09.544: INFO: Deleting PersistentVolume "local-pvsvjtz" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.549 Apr 11 18:34:09.549: INFO: pvc is nil Apr 11 18:34:09.549: INFO: Deleting PersistentVolume "local-pvfvfwm" STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837" 04/11/24 18:34:09.554 Apr 11 18:34:09.554: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:09.554: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:09.556: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:09.556: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:09.714 Apr 11 
18:34:09.714: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:09.714: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:09.716: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:09.716: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d" 04/11/24 18:34:09.867 Apr 11 18:34:09.867: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:09.867: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:09.868: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:09.868: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:10.008 Apr 11 18:34:10.008: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.008: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.009: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.009: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596" 04/11/24 18:34:10.145 Apr 11 18:34:10.145: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.145: INFO: >>> kubeConfig: 
/home/xtesting/.kube/config Apr 11 18:34:10.146: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.146: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:10.297 Apr 11 18:34:10.297: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.297: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.298: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.298: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc" 04/11/24 18:34:10.452 Apr 11 18:34:10.452: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.452: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.453: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.453: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:10.589 Apr 11 18:34:10.590: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.590: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.591: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.591: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63" 04/11/24 18:34:10.746 Apr 11 18:34:10.746: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.746: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.748: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.748: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:10.915 Apr 11 18:34:10.916: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.916: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.917: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.917: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f" 04/11/24 18:34:11.088 Apr 11 18:34:11.088: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:11.088: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:11.089: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:11.089: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the 
test directory 04/11/24 18:34:11.256 Apr 11 18:34:11.257: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:11.257: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:11.258: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:11.258: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba" 04/11/24 18:34:11.398 Apr 11 18:34:11.398: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:11.398: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:11.399: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:11.399: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:11.559 Apr 11 18:34:11.559: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:11.559: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:11.561: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:11.561: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed" 04/11/24 18:34:11.716 Apr 11 18:34:11.716: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 
11 18:34:11.716: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:11.718: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:11.718: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:11.873 Apr 11 18:34:11.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:11.873: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:11.875: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:11.875: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f" 04/11/24 18:34:12.031 Apr 11 18:34:12.031: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:12.031: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:12.033: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:12.033: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:12.158 Apr 11 18:34:12.158: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:12.158: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:12.160: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:12.160: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588" 04/11/24 18:34:12.305 Apr 11 18:34:12.305: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:12.305: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:12.306: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:12.306: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:12.448 Apr 11 18:34:12.449: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:12.449: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:12.450: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:12.450: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:34:12.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-1854" for this suite. 
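
The teardown above is driven entirely through the hostexec pod: every ExecWithOptions entry is a POST to that pod's exec subresource (the URLs logged above), which runs nsenter into the node's mount namespace and then a plain sh command, first umount and then rm -r, against each tmpfs path. As a rough illustration only, and assuming the same namespace, pod, container, and path names as in this run, one cleanup iteration could be reproduced by hand with kubectl exec (the two logged steps are combined with && here):

# Hypothetical manual equivalent of one umount + rm -r pair from the log above;
# the path is one of the volumes from this run and would differ on another run.
VOL=/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837
kubectl -n persistent-local-volumes-test-1854 exec hostexec-v126-worker2-gx475 \
  -c agnhost-container -- \
  nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c "umount \"$VOL\" && rm -r \"$VOL\""
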
04/11/24 18:34:12.602 ------------------------------ • [SLOW TEST] [89.772 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] test/e2e/storage/persistent_volumes-local.go:444 should be able to process many pods and reuse local volumes test/e2e/storage/persistent_volumes-local.go:534 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:32:42.836 Apr 11 18:32:42.836: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:32:42.838 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:32:42.85 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:32:42.854 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] Stress with local volumes [Serial] test/e2e/storage/persistent_volumes-local.go:458 STEP: Setting up 10 local volumes on node "v126-worker2" 04/11/24 18:32:42.866 STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837" 04/11/24 18:32:42.867 Apr 11 18:32:42.875: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-gx475" in namespace "persistent-local-volumes-test-1854" to be "running" Apr 11 18:32:42.878: INFO: Pod "hostexec-v126-worker2-gx475": Phase="Pending", Reason="", readiness=false. Elapsed: 3.187484ms Apr 11 18:32:44.883: INFO: Pod "hostexec-v126-worker2-gx475": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007804751s Apr 11 18:32:46.883: INFO: Pod "hostexec-v126-worker2-gx475": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.008286335s Apr 11 18:32:46.883: INFO: Pod "hostexec-v126-worker2-gx475" satisfied condition "running" Apr 11 18:32:46.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837" "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:46.884: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:46.885: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:46.885: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837%22+%22%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d" 04/11/24 18:32:47.058 Apr 11 18:32:47.058: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d" "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.058: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.059: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.059: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d%22+%22%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596" 04/11/24 18:32:47.229 Apr 11 18:32:47.229: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596" "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.229: INFO: >>> kubeConfig: 
/home/xtesting/.kube/config Apr 11 18:32:47.231: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.231: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596%22+%22%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc" 04/11/24 18:32:47.382 Apr 11 18:32:47.382: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc" "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.382: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.383: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.383: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc%22+%22%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63" 04/11/24 18:32:47.535 Apr 11 18:32:47.535: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63" "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.535: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.537: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.537: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63%22+%22%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f" 04/11/24 18:32:47.697 Apr 11 18:32:47.697: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f" "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.697: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.698: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.698: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f%22+%22%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba" 04/11/24 18:32:47.853 Apr 11 18:32:47.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba" "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.854: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.855: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.855: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba%22+%22%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path 
"/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed" 04/11/24 18:32:47.991 Apr 11 18:32:47.991: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed" "/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:47.991: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:47.992: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:47.992: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed%22+%22%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f" 04/11/24 18:32:48.145 Apr 11 18:32:48.146: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f" "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:48.146: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:32:48.148: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:48.148: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f%22+%22%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588" 04/11/24 18:32:48.283 Apr 11 18:32:48.283: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588" "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:32:48.283: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 
18:32:48.284: INFO: ExecWithOptions: Clientset creation Apr 11 18:32:48.284: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%22%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588%22+%26%26+mount+-t+tmpfs+-o+size%3D10m+tmpfs-%22%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588%22+%22%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Create 10 PVs 04/11/24 18:32:48.367 STEP: Start a goroutine to recycle unbound PVs 04/11/24 18:32:48.407 [It] should be able to process many pods and reuse local volumes test/e2e/storage/persistent_volumes-local.go:534 STEP: Creating 4 pods periodically 04/11/24 18:32:48.407 STEP: Waiting for all pods to complete successfully 04/11/24 18:32:48.407 Apr 11 18:32:57.479: INFO: Deleting pod pod-b9be9e0e-2d80-4f80-85ba-c67e8bb1b114 Apr 11 18:32:57.488: INFO: Deleting PersistentVolumeClaim "pvc-lm9x9" Apr 11 18:32:57.494: INFO: Deleting PersistentVolumeClaim "pvc-v958r" Apr 11 18:32:57.499: INFO: Deleting PersistentVolumeClaim "pvc-rnclw" Apr 11 18:32:57.504: INFO: 1/16 pods finished STEP: Delete "local-pv9nmq9" and create a new PV for same local volume storage 04/11/24 18:32:57.515 STEP: Delete "local-pv9nmq9" and create a new PV for same local volume storage 04/11/24 18:32:57.529 STEP: Delete "local-pvltp4n" and create a new PV for same local volume storage 04/11/24 18:32:57.534 STEP: Delete "local-pvt5w6w" and create a new PV for same local volume storage 04/11/24 18:32:57.546 Apr 11 18:32:59.479: INFO: Deleting pod pod-ff488f99-17ff-4922-94aa-3570ef79d09d Apr 11 18:32:59.489: INFO: Deleting PersistentVolumeClaim "pvc-g4xgv" Apr 11 18:32:59.494: INFO: Deleting PersistentVolumeClaim "pvc-j5hrt" Apr 11 18:32:59.499: INFO: Deleting PersistentVolumeClaim "pvc-5ds89" Apr 11 18:32:59.504: INFO: 2/16 pods finished STEP: Delete "local-pvrvrnd" and create a new PV for same local volume storage 04/11/24 18:32:59.515 STEP: Delete "local-pvbmcbw" and create a new PV for same local volume storage 04/11/24 18:32:59.53 STEP: Delete "local-pvp95kk" and create a new PV for same local volume storage 04/11/24 18:32:59.544 Apr 11 18:33:06.480: INFO: Deleting pod pod-e76924b7-22b5-4866-964a-3d9bd04636ac Apr 11 18:33:06.489: INFO: Deleting PersistentVolumeClaim "pvc-fpj9v" Apr 11 18:33:06.494: INFO: Deleting PersistentVolumeClaim "pvc-p2zf5" Apr 11 18:33:06.499: INFO: Deleting PersistentVolumeClaim "pvc-9hz77" Apr 11 18:33:06.503: INFO: 3/16 pods finished STEP: Delete "local-pvhvk5s" and create a new PV for same local volume storage 04/11/24 18:33:06.513 STEP: Delete "local-pvl7tng" and create a new PV for same local volume storage 04/11/24 18:33:06.527 STEP: Delete "local-pvk9h8w" and create a new PV for same local volume storage 04/11/24 18:33:06.541 Apr 11 18:33:11.479: INFO: Deleting pod pod-8c3b1cdf-6ee9-44b5-860b-b6f998e28875 Apr 11 18:33:11.493: INFO: Deleting PersistentVolumeClaim "pvc-h554z" Apr 11 18:33:11.498: INFO: Deleting PersistentVolumeClaim "pvc-xtrff" Apr 11 18:33:11.503: INFO: Deleting PersistentVolumeClaim "pvc-8s2b5" Apr 11 18:33:11.509: INFO: 4/16 pods finished STEP: Delete "local-pvsghcf" and create a new PV for same local volume storage 04/11/24 18:33:11.521 STEP: Delete "local-pvx8qv5" and create a new PV for same 
local volume storage 04/11/24 18:33:11.554 STEP: Delete "local-pv7z8lq" and create a new PV for same local volume storage 04/11/24 18:33:11.571 Apr 11 18:33:13.480: INFO: Deleting pod pod-473472f1-fd30-4c38-b22f-cb9dbb5a828f Apr 11 18:33:13.489: INFO: Deleting PersistentVolumeClaim "pvc-rkz4f" Apr 11 18:33:13.495: INFO: Deleting PersistentVolumeClaim "pvc-8q9kv" Apr 11 18:33:13.500: INFO: Deleting PersistentVolumeClaim "pvc-zjnb6" Apr 11 18:33:13.505: INFO: 5/16 pods finished STEP: Delete "local-pv2zsfj" and create a new PV for same local volume storage 04/11/24 18:33:13.52 STEP: Delete "local-pvc282m" and create a new PV for same local volume storage 04/11/24 18:33:13.535 STEP: Delete "local-pvfj27s" and create a new PV for same local volume storage 04/11/24 18:33:13.55 Apr 11 18:33:20.479: INFO: Deleting pod pod-c386cde0-0640-4cac-8fc9-414799a401fc Apr 11 18:33:20.489: INFO: Deleting PersistentVolumeClaim "pvc-4mxcj" Apr 11 18:33:20.494: INFO: Deleting PersistentVolumeClaim "pvc-p7nqb" Apr 11 18:33:20.499: INFO: Deleting PersistentVolumeClaim "pvc-hcnfm" Apr 11 18:33:20.504: INFO: 6/16 pods finished STEP: Delete "local-pvqj7lq" and create a new PV for same local volume storage 04/11/24 18:33:20.517 STEP: Delete "local-pvh5h5m" and create a new PV for same local volume storage 04/11/24 18:33:20.532 STEP: Delete "local-pvlpwtt" and create a new PV for same local volume storage 04/11/24 18:33:20.545 Apr 11 18:33:29.479: INFO: Deleting pod pod-d9a4bfb1-40ae-4c88-9322-35aa364562c1 Apr 11 18:33:29.489: INFO: Deleting PersistentVolumeClaim "pvc-7r7kg" Apr 11 18:33:29.493: INFO: Deleting PersistentVolumeClaim "pvc-8p97b" Apr 11 18:33:29.498: INFO: Deleting PersistentVolumeClaim "pvc-l6m5t" Apr 11 18:33:29.503: INFO: 7/16 pods finished STEP: Delete "local-pvghs9j" and create a new PV for same local volume storage 04/11/24 18:33:29.517 STEP: Delete "local-pvhp72w" and create a new PV for same local volume storage 04/11/24 18:33:29.532 STEP: Delete "local-pvmdz2q" and create a new PV for same local volume storage 04/11/24 18:33:29.546 Apr 11 18:33:30.480: INFO: Deleting pod pod-16006972-6962-4ede-9562-6ad564800827 Apr 11 18:33:30.488: INFO: Deleting PersistentVolumeClaim "pvc-ctw7r" Apr 11 18:33:30.493: INFO: Deleting PersistentVolumeClaim "pvc-bt62n" Apr 11 18:33:30.498: INFO: Deleting PersistentVolumeClaim "pvc-n7nbj" Apr 11 18:33:30.503: INFO: 8/16 pods finished STEP: Delete "local-pv7xgsv" and create a new PV for same local volume storage 04/11/24 18:33:30.515 STEP: Delete "local-pvtvnnw" and create a new PV for same local volume storage 04/11/24 18:33:30.531 STEP: Delete "local-pv6nb7x" and create a new PV for same local volume storage 04/11/24 18:33:30.543 Apr 11 18:33:33.479: INFO: Deleting pod pod-69526187-5636-43ff-b4a1-7befef5c4eba Apr 11 18:33:33.487: INFO: Deleting PersistentVolumeClaim "pvc-qqlcr" Apr 11 18:33:33.492: INFO: Deleting PersistentVolumeClaim "pvc-bvjxn" Apr 11 18:33:33.497: INFO: Deleting PersistentVolumeClaim "pvc-mq8nz" Apr 11 18:33:33.503: INFO: 9/16 pods finished STEP: Delete "local-pvw6rn2" and create a new PV for same local volume storage 04/11/24 18:33:33.513 STEP: Delete "local-pvg2kns" and create a new PV for same local volume storage 04/11/24 18:33:33.528 STEP: Delete "local-pvmb4j4" and create a new PV for same local volume storage 04/11/24 18:33:33.542 Apr 11 18:33:44.479: INFO: Deleting pod pod-524a6951-bf09-40a2-b942-132d3e893cad Apr 11 18:33:44.488: INFO: Deleting PersistentVolumeClaim "pvc-tpfz9" Apr 11 18:33:44.494: INFO: Deleting PersistentVolumeClaim 
"pvc-qfsth" Apr 11 18:33:44.499: INFO: Deleting PersistentVolumeClaim "pvc-cpg9c" Apr 11 18:33:44.504: INFO: 10/16 pods finished Apr 11 18:33:44.504: INFO: Deleting pod pod-d5869120-a5a9-4b2e-9421-f3c7fe418ffb Apr 11 18:33:44.512: INFO: Deleting PersistentVolumeClaim "pvc-d9pcz" Apr 11 18:33:44.517: INFO: Deleting PersistentVolumeClaim "pvc-2sr5g" STEP: Delete "local-pvmls9v" and create a new PV for same local volume storage 04/11/24 18:33:44.519 Apr 11 18:33:44.521: INFO: Deleting PersistentVolumeClaim "pvc-gfb4f" Apr 11 18:33:44.526: INFO: 11/16 pods finished STEP: Delete "local-pvmls9v" and create a new PV for same local volume storage 04/11/24 18:33:44.532 STEP: Delete "local-pv7fzh5" and create a new PV for same local volume storage 04/11/24 18:33:44.535 STEP: Delete "local-pv4r7l4" and create a new PV for same local volume storage 04/11/24 18:33:44.547 STEP: Delete "local-pvv9kkw" and create a new PV for same local volume storage 04/11/24 18:33:44.562 STEP: Delete "local-pvthgvv" and create a new PV for same local volume storage 04/11/24 18:33:44.578 STEP: Delete "local-pv6kv4s" and create a new PV for same local volume storage 04/11/24 18:33:44.593 Apr 11 18:33:48.479: INFO: Deleting pod pod-0bf26b7e-8829-4e9d-8185-c0d2ce047ee7 Apr 11 18:33:48.487: INFO: Deleting PersistentVolumeClaim "pvc-n4ssd" Apr 11 18:33:48.492: INFO: Deleting PersistentVolumeClaim "pvc-t4tvf" Apr 11 18:33:48.497: INFO: Deleting PersistentVolumeClaim "pvc-jfgwx" Apr 11 18:33:48.501: INFO: 12/16 pods finished STEP: Delete "local-pvnm2wz" and create a new PV for same local volume storage 04/11/24 18:33:48.514 STEP: Delete "local-pv5ltt8" and create a new PV for same local volume storage 04/11/24 18:33:48.53 STEP: Delete "local-pvqgg86" and create a new PV for same local volume storage 04/11/24 18:33:48.542 Apr 11 18:33:56.479: INFO: Deleting pod pod-f144abb6-829b-44b0-b9ca-352f16f97ca2 Apr 11 18:33:56.493: INFO: Deleting PersistentVolumeClaim "pvc-9vgrf" Apr 11 18:33:56.499: INFO: Deleting PersistentVolumeClaim "pvc-wk697" Apr 11 18:33:56.504: INFO: Deleting PersistentVolumeClaim "pvc-zpr68" Apr 11 18:33:56.509: INFO: 13/16 pods finished STEP: Delete "local-pvtbvv2" and create a new PV for same local volume storage 04/11/24 18:33:56.523 STEP: Delete "local-pvssv66" and create a new PV for same local volume storage 04/11/24 18:33:56.538 STEP: Delete "local-pv77z5r" and create a new PV for same local volume storage 04/11/24 18:33:56.553 Apr 11 18:33:57.481: INFO: Deleting pod pod-88eb423c-d1bb-4b25-9b20-27fb1853fd73 Apr 11 18:33:57.492: INFO: Deleting PersistentVolumeClaim "pvc-xjwmv" Apr 11 18:33:57.498: INFO: Deleting PersistentVolumeClaim "pvc-2bm8v" Apr 11 18:33:57.504: INFO: Deleting PersistentVolumeClaim "pvc-j4pnl" Apr 11 18:33:57.509: INFO: 14/16 pods finished STEP: Delete "local-pvvzhzn" and create a new PV for same local volume storage 04/11/24 18:33:57.521 STEP: Delete "local-pvs2g6d" and create a new PV for same local volume storage 04/11/24 18:33:57.536 STEP: Delete "local-pvqqr6v" and create a new PV for same local volume storage 04/11/24 18:33:57.547 Apr 11 18:34:05.479: INFO: Deleting pod pod-5304b08b-b2b6-4a8c-8ef0-1e1ab02f889a Apr 11 18:34:05.486: INFO: Deleting PersistentVolumeClaim "pvc-6wbj9" Apr 11 18:34:05.491: INFO: Deleting PersistentVolumeClaim "pvc-fxzj7" Apr 11 18:34:05.497: INFO: Deleting PersistentVolumeClaim "pvc-gxph8" Apr 11 18:34:05.501: INFO: 15/16 pods finished STEP: Delete "local-pv4t2ws" and create a new PV for same local volume storage 04/11/24 18:34:05.512 STEP: Delete 
"local-pv275gr" and create a new PV for same local volume storage 04/11/24 18:34:05.528 STEP: Delete "local-pvqqjkj" and create a new PV for same local volume storage 04/11/24 18:34:05.541 Apr 11 18:34:09.479: INFO: Deleting pod pod-ec008659-2bf3-4538-8ed8-fd5fabc7026c Apr 11 18:34:09.487: INFO: Deleting PersistentVolumeClaim "pvc-qv72n" Apr 11 18:34:09.493: INFO: Deleting PersistentVolumeClaim "pvc-rwn86" Apr 11 18:34:09.498: INFO: Deleting PersistentVolumeClaim "pvc-qv5gv" Apr 11 18:34:09.503: INFO: 16/16 pods finished [AfterEach] Stress with local volumes [Serial] test/e2e/storage/persistent_volumes-local.go:522 STEP: Stop and wait for recycle goroutine to finish 04/11/24 18:34:09.503 STEP: Clean all PVs 04/11/24 18:34:09.503 STEP: Cleaning up 10 local volumes on node "v126-worker2" 04/11/24 18:34:09.503 STEP: Cleaning up PVC and PV 04/11/24 18:34:09.503 Apr 11 18:34:09.504: INFO: pvc is nil Apr 11 18:34:09.504: INFO: Deleting PersistentVolume "local-pvl25m8" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.509 Apr 11 18:34:09.509: INFO: pvc is nil Apr 11 18:34:09.509: INFO: Deleting PersistentVolume "local-pvcqnsh" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.514 Apr 11 18:34:09.514: INFO: pvc is nil Apr 11 18:34:09.514: INFO: Deleting PersistentVolume "local-pvdfhzn" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.519 Apr 11 18:34:09.519: INFO: pvc is nil Apr 11 18:34:09.519: INFO: Deleting PersistentVolume "local-pvqlvnm" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.524 Apr 11 18:34:09.524: INFO: pvc is nil Apr 11 18:34:09.524: INFO: Deleting PersistentVolume "local-pvvgw62" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.529 Apr 11 18:34:09.529: INFO: pvc is nil Apr 11 18:34:09.529: INFO: Deleting PersistentVolume "local-pvrvm8q" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.534 Apr 11 18:34:09.534: INFO: pvc is nil Apr 11 18:34:09.534: INFO: Deleting PersistentVolume "local-pv5m9bz" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.538 Apr 11 18:34:09.539: INFO: pvc is nil Apr 11 18:34:09.539: INFO: Deleting PersistentVolume "local-pvl7q4l" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.544 Apr 11 18:34:09.544: INFO: pvc is nil Apr 11 18:34:09.544: INFO: Deleting PersistentVolume "local-pvsvjtz" STEP: Cleaning up PVC and PV 04/11/24 18:34:09.549 Apr 11 18:34:09.549: INFO: pvc is nil Apr 11 18:34:09.549: INFO: Deleting PersistentVolume "local-pvfvfwm" STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837" 04/11/24 18:34:09.554 Apr 11 18:34:09.554: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:09.554: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:09.556: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:09.556: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:09.714 Apr 11 
18:34:09.714: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:09.714: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:09.716: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:09.716: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-c821aab8-0cc2-4484-81f9-3aeed0836837&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d" 04/11/24 18:34:09.867 Apr 11 18:34:09.867: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:09.867: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:09.868: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:09.868: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:10.008 Apr 11 18:34:10.008: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.008: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.009: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.009: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-8d42d021-b0bd-46e1-b56d-df195168a11d&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596" 04/11/24 18:34:10.145 Apr 11 18:34:10.145: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.145: INFO: >>> kubeConfig: 
/home/xtesting/.kube/config Apr 11 18:34:10.146: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.146: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:10.297 Apr 11 18:34:10.297: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.297: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.298: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.298: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-8d92484b-f0a5-4ebc-bd3f-c64ec8b45596&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc" 04/11/24 18:34:10.452 Apr 11 18:34:10.452: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.452: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.453: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.453: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:34:10.589 Apr 11 18:34:10.590: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:34:10.590: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:34:10.591: INFO: ExecWithOptions: Clientset creation Apr 11 18:34:10.591: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-10a9c0b0-7f9e-4ca8-aad9-8e62cdb8d3fc&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63" 04/11/24 18:34:10.746
Apr 11 18:34:10.746: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:10.746: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:10.748: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:10.748: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Removing the test directory 04/11/24 18:34:10.915
Apr 11 18:34:10.916: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:10.916: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:10.917: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:10.917: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f" 04/11/24 18:34:11.088
Apr 11 18:34:11.088: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:11.088: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:11.089: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:11.089: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Removing the test directory 04/11/24 18:34:11.256
Apr 11 18:34:11.257: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:11.257: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:11.258: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:11.258: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-8dffc439-4c64-4942-bf4b-15cbdd29866f&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba" 04/11/24 18:34:11.398
Apr 11 18:34:11.398: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:11.398: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:11.399: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:11.399: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Removing the test directory 04/11/24 18:34:11.559
Apr 11 18:34:11.559: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:11.559: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:11.561: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:11.561: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-5937c208-d670-4bdf-8528-faf91b1976ba&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed" 04/11/24 18:34:11.716
Apr 11 18:34:11.716: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:11.716: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:11.718: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:11.718: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Removing the test directory 04/11/24 18:34:11.873
Apr 11 18:34:11.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:11.873: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:11.875: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:11.875: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-d7ba3742-0857-441a-ba89-c6199dd133ed&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f" 04/11/24 18:34:12.031
Apr 11 18:34:12.031: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:12.031: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:12.033: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:12.033: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Removing the test directory 04/11/24 18:34:12.158
Apr 11 18:34:12.158: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:12.158: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:12.160: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:12.160: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-0c753d90-60a1-4276-baac-f72ce21da01f&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Unmount tmpfs mount point on node "v126-worker2" at path "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588" 04/11/24 18:34:12.305
Apr 11 18:34:12.305: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588"] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:12.305: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:12.306: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:12.306: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%22%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Removing the test directory 04/11/24 18:34:12.448
Apr 11 18:34:12.449: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-326fbeee-a3f3-4310-b09b-986692b47588] Namespace:persistent-local-volumes-test-1854 PodName:hostexec-v126-worker2-gx475 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:34:12.449: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:34:12.450: INFO: ExecWithOptions: Clientset creation
Apr 11 18:34:12.450: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1854/pods/hostexec-v126-worker2-gx475/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-326fbeee-a3f3-4310-b09b-986692b47588&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
[AfterEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/node/init/init.go:32
Apr 11 18:34:12.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
tear down framework | framework.go:193
STEP: Destroying namespace "persistent-local-volumes-test-1854" for this suite.
04/11/24 18:34:12.602 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [SynchronizedAfterSuite] test/e2e/e2e.go:88 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 Apr 11 18:34:12.705: INFO: Running AfterSuite actions on node 1 Apr 11 18:34:12.705: INFO: Skipping dumping logs from cluster ------------------------------ [SynchronizedAfterSuite] PASSED [0.000 seconds] [SynchronizedAfterSuite] test/e2e/e2e.go:88 Begin Captured GinkgoWriter Output >> [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 Apr 11 18:34:12.705: INFO: Running AfterSuite actions on node 1 Apr 11 18:34:12.705: INFO: Skipping dumping logs from cluster << End Captured GinkgoWriter Output ------------------------------ [ReportAfterSuite] Kubernetes e2e suite report test/e2e/e2e_test.go:153 [ReportAfterSuite] TOP-LEVEL test/e2e/e2e_test.go:153 ------------------------------ [ReportAfterSuite] PASSED [0.000 seconds] [ReportAfterSuite] Kubernetes e2e suite report test/e2e/e2e_test.go:153 Begin Captured GinkgoWriter Output >> [ReportAfterSuite] TOP-LEVEL test/e2e/e2e_test.go:153 << End Captured GinkgoWriter Output ------------------------------ [ReportAfterSuite] Kubernetes e2e JUnit report test/e2e/framework/test_context.go:529 [ReportAfterSuite] TOP-LEVEL test/e2e/framework/test_context.go:529 ------------------------------ [ReportAfterSuite] PASSED [0.270 seconds] [ReportAfterSuite] Kubernetes e2e JUnit report test/e2e/framework/test_context.go:529 Begin Captured GinkgoWriter Output >> [ReportAfterSuite] TOP-LEVEL test/e2e/framework/test_context.go:529 << End Captured GinkgoWriter Output ------------------------------ Ran 2 of 7069 Specs in 107.499 seconds SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 7067 Skipped PASS Ginkgo ran 1 suite in 1m48.20258838s Test Suite Passed
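------------------------------
Note: the cleanup steps captured above are ordinary pods/exec subresource requests issued by the e2e framework's ExecWithOptions helper (the POST .../pods/hostexec-v126-worker2-gx475/exec URLs in the log). The following is a minimal client-go sketch, not the suite's own code, that issues the same kind of request; the namespace, pod, container, and kubeconfig path are taken from the log, while the mount path shown is just one of the already-removed directories, kept purely for illustration.

package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Kubeconfig path as reported by the suite ("kubeConfig: /home/xtesting/.kube/config").
	config, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the same pods/exec request the framework logged: run the host-level
	// command inside the hostexec pod's agnhost container via nsenter.
	// The mount path is illustrative only; the suite already unmounted and removed it.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("persistent-local-volumes-test-1854").
		Name("hostexec-v126-worker2-gx475").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "agnhost-container",
			Command: []string{"nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--",
				"sh", "-c", `umount "/tmp/local-volume-test-7902c96c-bebc-46c0-9cc4-969f8c6a4d63"`},
			Stdout: true,
			Stderr: true,
		}, scheme.ParameterCodec)

	// Stream the command over SPDY, mirroring the execute(POST ...) entries above.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	if err := exec.StreamWithContext(context.Background(), remotecommand.StreamOptions{
		Stdout: os.Stdout,
		Stderr: os.Stderr,
	}); err != nil {
		panic(err)
	}
}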