Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1712858857 - will randomize all specs

Will run 229 of 7069 specs
Running in parallel across 10 processes
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.025 seconds]
[sig-storage] Pod Disks [Feature:StorageProvider] [BeforeEach]
test/e2e/storage/pd.go:76
  schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow]
  test/e2e/storage/pd.go:233
    using 4 containers and 1 PDs
    test/e2e/storage/pd.go:257

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:07:37.805
  Apr 11 18:07:37.805: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename pod-disks 04/11/24 18:07:37.806
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:37.815
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:37.819
  [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/storage/pd.go:76
  Apr 11 18:07:37.822: INFO: Requires at least 2 nodes (not -1)
  [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:07:37.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
    tear down framework | framework.go:193
  STEP: Destroying namespace "pod-disks-4911" for this suite. 04/11/24 18:07:37.827
  << End Captured GinkgoWriter Output

  Requires at least 2 nodes (not -1)
  In [BeforeEach] at: test/e2e/storage/pd.go:77
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.024 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach]
test/e2e/storage/volume_metrics.go:62
  PVC
  test/e2e/storage/volume_metrics.go:491
    should create metrics for total number of volumes in A/D Controller
    test/e2e/storage/volume_metrics.go:486

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] [Serial] Volume metrics
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:07:37.811
  Apr 11 18:07:37.811: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename pv 04/11/24 18:07:37.812
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:37.821
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:37.823
  [BeforeEach] [sig-storage] [Serial] Volume metrics
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] [Serial] Volume metrics
    test/e2e/storage/volume_metrics.go:62
  Apr 11 18:07:37.827: INFO: Only supported for providers [gce gke aws] (not local)
  [AfterEach] [sig-storage] [Serial] Volume metrics
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:07:37.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [AfterEach] [sig-storage] [Serial] Volume metrics
    test/e2e/storage/volume_metrics.go:101
  [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
    tear down framework | framework.go:193
  STEP: Destroying namespace "pv-4407" for this suite. 04/11/24 18:07:37.831
  << End Captured GinkgoWriter Output

  Only supported for providers [gce gke aws] (not local)
  In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70

  There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode:
    [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.017 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach]
test/e2e/storage/volume_metrics.go:62
  Ephemeral
  test/e2e/storage/volume_metrics.go:495
    should create volume metrics in Volume Manager
    test/e2e/storage/volume_metrics.go:483

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] [Serial] Volume metrics
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:07:37.841
  Apr 11 18:07:37.841: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename pv 04/11/24 18:07:37.841
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:37.847
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:37.85
  [BeforeEach] [sig-storage] [Serial] Volume metrics
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] [Serial] Volume metrics
    test/e2e/storage/volume_metrics.go:62
  Apr 11 18:07:37.852: INFO: Only supported for providers [gce gke aws] (not local)
  [AfterEach] [sig-storage] [Serial] Volume metrics
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:07:37.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [AfterEach] [sig-storage] [Serial] Volume metrics
    test/e2e/storage/volume_metrics.go:101
  [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
    tear down framework | framework.go:193
  STEP: Destroying namespace "pv-5947" for this suite. 04/11/24 18:07:37.855
  << End Captured GinkgoWriter Output

  Only supported for providers [gce gke aws] (not local)
  In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70

  There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode:
    [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [10.539 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach]
  test/e2e/storage/persistent_volumes-local.go:198
    Set fsGroup for local volume
    test/e2e/storage/persistent_volumes-local.go:263
      should set same fsGroup for two pods simultaneously [Slow]
      test/e2e/storage/persistent_volumes-local.go:277

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] PersistentVolumes-local
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:07:37.848
  Apr 11 18:07:37.848: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:07:37.849
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:37.855
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:37.857
  [BeforeEach] [sig-storage] PersistentVolumes-local
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] PersistentVolumes-local
    test/e2e/storage/persistent_volumes-local.go:161
  [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
    test/e2e/storage/persistent_volumes-local.go:198
  Apr 11 18:07:37.866: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-6sgh7" in namespace "persistent-local-volumes-test-3597" to be "running"
  Apr 11 18:07:37.867: INFO: Pod "hostexec-v126-worker2-6sgh7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.508337ms
  Apr 11 18:07:39.871: INFO: Pod "hostexec-v126-worker2-6sgh7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0054433s
  Apr 11 18:07:41.871: INFO: Pod "hostexec-v126-worker2-6sgh7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004850605s
  Apr 11 18:07:43.871: INFO: Pod "hostexec-v126-worker2-6sgh7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.005175669s
  Apr 11 18:07:45.871: INFO: Pod "hostexec-v126-worker2-6sgh7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.005453295s
  Apr 11 18:07:47.872: INFO: Pod "hostexec-v126-worker2-6sgh7": Phase="Running", Reason="", readiness=true. Elapsed: 10.005996395s
  Apr 11 18:07:47.872: INFO: Pod "hostexec-v126-worker2-6sgh7" satisfied condition "running"
  Apr 11 18:07:47.872: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3597 PodName:hostexec-v126-worker2-6sgh7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
  Apr 11 18:07:47.872: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  Apr 11 18:07:47.873: INFO: ExecWithOptions: Clientset creation
  Apr 11 18:07:47.873: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3597/pods/hostexec-v126-worker2-6sgh7/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
  Apr 11 18:07:48.376: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
  Apr 11 18:07:48.376: INFO: exec v126-worker2: stdout: "0\n"
  Apr 11 18:07:48.376: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
  Apr 11 18:07:48.376: INFO: exec v126-worker2: exit code: 0
  Apr 11 18:07:48.376: INFO: Requires at least 1 scsi fs localSSD
  [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
    test/e2e/storage/persistent_volumes-local.go:207
  STEP: Cleaning up PVC and PV 04/11/24 18:07:48.376
  [AfterEach] [sig-storage] PersistentVolumes-local
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:07:48.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
    tear down framework | framework.go:193
  STEP: Destroying namespace "persistent-local-volumes-test-3597" for this suite. 04/11/24 18:07:48.382
  << End Captured GinkgoWriter Output

  Requires at least 1 scsi fs localSSD
  In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255

  There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode:
    [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-storage] Volumes [BeforeEach]
test/e2e/common/storage/volumes.go:66
  NFSv3
  test/e2e/common/storage/volumes.go:100
    should be mountable for NFSv3
    test/e2e/common/storage/volumes.go:101

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] Volumes
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:07:48.409
  Apr 11 18:07:48.409: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename volume 04/11/24 18:07:48.41
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:48.422
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:48.427
  [BeforeEach] [sig-storage] Volumes
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] Volumes
    test/e2e/common/storage/volumes.go:66
  Apr 11 18:07:48.431: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
  [AfterEach] [sig-storage] Volumes
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:07:48.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [DeferCleanup (Each)] [sig-storage] Volumes
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] Volumes
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] Volumes
    tear down framework | framework.go:193
  STEP: Destroying namespace "volume-659" for this suite. 04/11/24 18:07:48.436
  << End Captured GinkgoWriter Output

  Only supported for node OS distro [gci ubuntu custom] (not debian)
  In [BeforeEach] at: test/e2e/common/storage/volumes.go:67
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [18.190 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach]
  test/e2e/storage/persistent_volumes-local.go:198
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:212
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:241

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] PersistentVolumes-local
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:07:37.877
  Apr 11 18:07:37.877: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:07:37.878
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:37.885
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:37.887
  [BeforeEach] [sig-storage] PersistentVolumes-local
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] PersistentVolumes-local
    test/e2e/storage/persistent_volumes-local.go:161
  [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
    test/e2e/storage/persistent_volumes-local.go:198
  Apr 11 18:07:37.895: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-vvjjn" in namespace "persistent-local-volumes-test-411" to be "running"
  Apr 11 18:07:37.897: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Pending", Reason="", readiness=false. Elapsed: 1.897769ms
  Apr 11 18:07:39.901: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005994586s
  Apr 11 18:07:41.900: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005552791s
  Apr 11 18:07:43.901: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006205145s
  Apr 11 18:07:45.902: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006951639s
  Apr 11 18:07:47.900: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.005552399s
  Apr 11 18:07:49.901: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.005999425s
  Apr 11 18:07:51.900: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.00514828s
  Apr 11 18:07:53.900: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Pending", Reason="", readiness=false. Elapsed: 16.005338237s
  Apr 11 18:07:55.901: INFO: Pod "hostexec-v126-worker2-vvjjn": Phase="Running", Reason="", readiness=true. Elapsed: 18.005979348s
  Apr 11 18:07:55.901: INFO: Pod "hostexec-v126-worker2-vvjjn" satisfied condition "running"
  Apr 11 18:07:55.901: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-411 PodName:hostexec-v126-worker2-vvjjn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
  Apr 11 18:07:55.901: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  Apr 11 18:07:55.902: INFO: ExecWithOptions: Clientset creation
  Apr 11 18:07:55.902: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-411/pods/hostexec-v126-worker2-vvjjn/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
  Apr 11 18:07:56.056: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
  Apr 11 18:07:56.056: INFO: exec v126-worker2: stdout: "0\n"
  Apr 11 18:07:56.056: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
  Apr 11 18:07:56.056: INFO: exec v126-worker2: exit code: 0
  Apr 11 18:07:56.056: INFO: Requires at least 1 scsi fs localSSD
  [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
    test/e2e/storage/persistent_volumes-local.go:207
  STEP: Cleaning up PVC and PV 04/11/24 18:07:56.056
  [AfterEach] [sig-storage] PersistentVolumes-local
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:07:56.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
    tear down framework | framework.go:193
  STEP: Destroying namespace "persistent-local-volumes-test-411" for this suite. 04/11/24 18:07:56.062
  << End Captured GinkgoWriter Output

  Requires at least 1 scsi fs localSSD
  In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255

  There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode:
    [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SS
------------------------------
• [SLOW TEST] [30.084 seconds]
[sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile
test/e2e/storage/host_path_type.go:86
------------------------------
SSSSSS
------------------------------
S [SKIPPED] [0.025 seconds]
[sig-storage] Pod Disks [Feature:StorageProvider] [BeforeEach]
test/e2e/storage/pd.go:76
  schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow]
  test/e2e/storage/pd.go:233
    using 1 containers and 2 PDs
    test/e2e/storage/pd.go:257

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:08:07.919
  Apr 11 18:08:07.919: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename pod-disks 04/11/24 18:08:07.92
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:08:07.929
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:08:07.933
  [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/storage/pd.go:76
  Apr 11 18:08:07.936: INFO: Requires at least 2 nodes (not -1)
  [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:08:07.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
    tear down framework | framework.go:193
  STEP: Destroying namespace "pod-disks-4160" for this suite. 04/11/24 18:08:07.94
  << End Captured GinkgoWriter Output

  Requires at least 2 nodes (not -1)
  In [BeforeEach] at: test/e2e/storage/pd.go:77
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.024 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach]
test/e2e/storage/volume_metrics.go:62
  PVController
  test/e2e/storage/volume_metrics.go:500
    should create unbound pvc count metrics for pvc controller after creating pvc only
    test/e2e/storage/volume_metrics.go:611

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] [Serial] Volume metrics
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:08:07.956
  Apr 11 18:08:07.957: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename pv 04/11/24 18:08:07.958
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:08:07.966
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:08:07.969
  [BeforeEach] [sig-storage] [Serial] Volume metrics
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] [Serial] Volume metrics
    test/e2e/storage/volume_metrics.go:62
  Apr 11 18:08:07.973: INFO: Only supported for providers [gce gke aws] (not local)
  [AfterEach] [sig-storage] [Serial] Volume metrics
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:08:07.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [AfterEach] [sig-storage] [Serial] Volume metrics
    test/e2e/storage/volume_metrics.go:101
  [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
    tear down framework | framework.go:193
  STEP: Destroying namespace "pv-9881" for this suite. 04/11/24 18:08:07.977
  << End Captured GinkgoWriter Output

  Only supported for providers [gce gke aws] (not local)
  In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70

  There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode:
    [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [32.090 seconds]
[sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev
test/e2e/storage/host_path_type.go:164
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [55.179 seconds]
[sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:252
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [50.559 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:235
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [53.584 seconds]
[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:252
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [75.910 seconds]
[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
test/e2e/storage/csi_mock_volume.go:549
------------------------------
SS
------------------------------
• [SLOW TEST] [76.838 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:258
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [28.089 seconds]
[sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev
test/e2e/storage/host_path_type.go:96
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [12.072 seconds]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/projected_configmap.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [100.641 seconds]
[sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error
test/e2e/storage/csi_mock_volume.go:942
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.030 seconds]
[sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] [BeforeEach]
test/e2e/storage/persistent_volumes-gce.go:79
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  test/e2e/storage/persistent_volumes-gce.go:144

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:09:18.588
  Apr 11 18:09:18.588: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename pv 04/11/24 18:09:18.59
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:09:18.6
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:09:18.604
  [BeforeEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/storage/persistent_volumes-gce.go:79
  Apr 11 18:09:18.608: INFO: Only supported for providers [gce gke] (not local)
  [AfterEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:09:18.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [AfterEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/storage/persistent_volumes-gce.go:113
  Apr 11 18:09:18.612: INFO: AfterEach: Cleaning up test resources
  Apr 11 18:09:18.612: INFO: pvc is nil
  Apr 11 18:09:18.612: INFO: pv is nil
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    tear down framework | framework.go:193
  STEP: Destroying namespace "pv-8175" for this suite. 04/11/24 18:09:18.612
  << End Captured GinkgoWriter Output

  Only supported for providers [gce gke] (not local)
  In [BeforeEach] at: test/e2e/storage/persistent_volumes-gce.go:87
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [6.201 seconds]
[sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket
test/e2e/storage/host_path_type.go:370
------------------------------
SSSSSSSS
------------------------------
• [SLOW TEST] [103.947 seconds]
[sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
test/e2e/storage/csi_mock_volume.go:700
------------------------------
SSSSSSSSSS
------------------------------
S [SKIPPED] [0.033 seconds]
[sig-storage] Pod Disks [Feature:StorageProvider] [BeforeEach]
test/e2e/storage/pd.go:76
  schedule pods each with a PD, delete pod and verify detach [Slow]
  test/e2e/storage/pd.go:95
    for read-only PD with pod delete grace period of "default (30s)"
    test/e2e/storage/pd.go:137

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:09:21.815
  Apr 11 18:09:21.815: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename pod-disks 04/11/24 18:09:21.817
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:09:21.829
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:09:21.833
  [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/storage/pd.go:76
  Apr 11 18:09:21.837: INFO: Requires at least 2 nodes (not -1)
  [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:09:21.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
    tear down framework | framework.go:193
  STEP: Destroying namespace "pod-disks-5473" for this suite. 04/11/24 18:09:21.843
  << End Captured GinkgoWriter Output

  Requires at least 2 nodes (not -1)
  In [BeforeEach] at: test/e2e/storage/pd.go:77
------------------------------
SS
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] [BeforeEach]
test/e2e/storage/persistent_volumes-gce.go:79
  should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk
  test/e2e/storage/persistent_volumes-gce.go:158

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:09:21.852
  Apr 11 18:09:21.852: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename pv 04/11/24 18:09:21.854
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:09:21.865
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:09:21.869
  [BeforeEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/storage/persistent_volumes-gce.go:79
  Apr 11 18:09:21.873: INFO: Only supported for providers [gce gke] (not local)
  [AfterEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:09:21.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [AfterEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/storage/persistent_volumes-gce.go:113
  Apr 11 18:09:21.878: INFO: AfterEach: Cleaning up test resources
  Apr 11 18:09:21.878: INFO: pvc is nil
  Apr 11 18:09:21.878: INFO: pv is nil
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider]
    tear down framework | framework.go:193
  STEP: Destroying namespace "pv-613" for this suite. 04/11/24 18:09:21.879
  << End Captured GinkgoWriter Output

  Only supported for providers [gce gke] (not local)
  In [BeforeEach] at: test/e2e/storage/persistent_volumes-gce.go:87
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [75.933 seconds]
[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
test/e2e/storage/csi_mock_volume.go:1638
------------------------------
SSSSSSSSSSSS
------------------------------
• [SLOW TEST] [49.921 seconds]
[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:258
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [10.200 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach]
  test/e2e/storage/persistent_volumes-local.go:198
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:251
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:252

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-storage] PersistentVolumes-local
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/11/24 18:09:19.432
  Apr 11 18:09:19.432: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:09:19.434
  STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:09:19.444
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:09:19.448
  [BeforeEach] [sig-storage] PersistentVolumes-local
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-storage] PersistentVolumes-local
    test/e2e/storage/persistent_volumes-local.go:161
  [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
    test/e2e/storage/persistent_volumes-local.go:198
  Apr 11 18:09:19.465: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-ph2fb" in namespace "persistent-local-volumes-test-3036" to be "running"
  Apr 11 18:09:19.468: INFO: Pod "hostexec-v126-worker2-ph2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.827248ms
  Apr 11 18:09:21.471: INFO: Pod "hostexec-v126-worker2-ph2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005639056s
  Apr 11 18:09:23.473: INFO: Pod "hostexec-v126-worker2-ph2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007336085s
  Apr 11 18:09:25.472: INFO: Pod "hostexec-v126-worker2-ph2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006249424s
  Apr 11 18:09:27.473: INFO: Pod "hostexec-v126-worker2-ph2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007235587s
  Apr 11 18:09:29.472: INFO: Pod "hostexec-v126-worker2-ph2fb": Phase="Running", Reason="", readiness=true. Elapsed: 10.006838814s
  Apr 11 18:09:29.472: INFO: Pod "hostexec-v126-worker2-ph2fb" satisfied condition "running"
  Apr 11 18:09:29.472: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3036 PodName:hostexec-v126-worker2-ph2fb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
  Apr 11 18:09:29.472: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  Apr 11 18:09:29.474: INFO: ExecWithOptions: Clientset creation
  Apr 11 18:09:29.474: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3036/pods/hostexec-v126-worker2-ph2fb/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
  Apr 11 18:09:29.621: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
  Apr 11 18:09:29.621: INFO: exec v126-worker2: stdout: "0\n"
  Apr 11 18:09:29.621: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
  Apr 11 18:09:29.621: INFO: exec v126-worker2: exit code: 0
  Apr 11 18:09:29.621: INFO: Requires at least 1 scsi fs localSSD
  [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
    test/e2e/storage/persistent_volumes-local.go:207
  STEP: Cleaning up PVC and PV 04/11/24 18:09:29.622
  [AfterEach] [sig-storage] PersistentVolumes-local
    test/e2e/framework/node/init/init.go:32
  Apr 11 18:09:29.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
    dump namespaces |
framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-3036" for this suite. 04/11/24 18:09:29.626 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [14.070 seconds] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] test/e2e/common/storage/configmap_volume.go:62 ------------------------------ SSSSSSS ------------------------------ • [SLOW TEST] [45.232 seconds] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1 test/e2e/storage/persistent_volumes-local.go:235 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [16.220 seconds] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev test/e2e/storage/host_path_type.go:276 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [124.620 seconds] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP test/e2e/storage/csi_mock_volume.go:1771 ------------------------------ SSSSSSSSSSSSSSSS 
------------------------------ • [SLOW TEST] [99.959 seconds] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil test/e2e/storage/csi_mock_volume.go:549 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.033 seconds] [sig-storage] Pod Disks [Feature:StorageProvider] [BeforeEach] test/e2e/storage/pd.go:76 schedule pods each with a PD, delete pod and verify detach [Slow] test/e2e/storage/pd.go:95 for read-only PD with pod delete grace period of "immediate (0s)" test/e2e/storage/pd.go:137 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:09:49.905 Apr 11 18:09:49.905: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pod-disks 04/11/24 18:09:49.907 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:09:49.919 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:09:49.923 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/storage/pd.go:76 Apr 11 18:09:49.927: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/node/init/init.go:32 Apr 11 18:09:49.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] tear down framework | framework.go:193 STEP: Destroying namespace "pod-disks-2071" for this suite. 
04/11/24 18:09:49.933 << End Captured GinkgoWriter Output Requires at least 2 nodes (not -1) In [BeforeEach] at: test/e2e/storage/pd.go:77 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.033 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:09:49.997 Apr 11 18:09:49.997: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:09:49.999 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:09:50.01 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:09:50.014 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:09:50.018: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:09:50.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-5922" for this suite. 
04/11/24 18:09:50.024 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSS ------------------------------ • [SLOW TEST] [20.850 seconds] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1 test/e2e/storage/persistent_volumes-local.go:235 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [32.771 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [65.935 seconds] [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment test/e2e/storage/csi_mock_volume.go:392 ------------------------------ SSSSSSSSSS ------------------------------ • [SLOW TEST] [6.074 seconds] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] test/e2e/common/storage/configmap_volume.go:113 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [36.662 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1 test/e2e/storage/persistent_volumes-local.go:235 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS 
------------------------------ • [SLOW TEST] [49.044 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:252 ------------------------------ SSS ------------------------------ • [SLOW TEST] [28.091 seconds] [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile test/e2e/storage/host_path_type.go:140 ------------------------------ SS ------------------------------ • [SLOW TEST] [30.093 seconds] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory test/e2e/storage/host_path_type.go:154 ------------------------------ SSS ------------------------------ • [SLOW TEST] [12.194 seconds] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile test/e2e/storage/host_path_type.go:295 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [50.180 seconds] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:252 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [12.067 seconds] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] test/e2e/common/storage/host_path.go:51 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [52.972 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow] test/e2e/storage/persistent_volumes-local.go:277 
------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] Dynamic Provisioning test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] [Feature:StorageProvider] test/e2e/storage/volume_provisioning.go:150 [It] should provision storage with non-default reclaim policy Retain test/e2e/storage/volume_provisioning.go:375 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Dynamic Provisioning set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:10:46.618 Apr 11 18:10:46.618: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename volume-provisioning 04/11/24 18:10:46.62 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:10:46.631 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:10:46.635 [BeforeEach] [sig-storage] Dynamic Provisioning test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Dynamic Provisioning test/e2e/storage/volume_provisioning.go:144 [It] should provision storage with non-default reclaim policy Retain test/e2e/storage/volume_provisioning.go:375 Apr 11 18:10:46.639: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning test/e2e/framework/node/init/init.go:32 Apr 11 18:10:46.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Dynamic Provisioning test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Dynamic Provisioning dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Dynamic Provisioning tear down framework | framework.go:193 STEP: Destroying namespace "volume-provisioning-3360" for this suite. 
04/11/24 18:10:46.644 << End Captured GinkgoWriter Output Only supported for providers [gce gke] (not local) In [It] at: test/e2e/storage/volume_provisioning.go:376 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [113.732 seconds] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error test/e2e/storage/csi_mock_volume.go:1075 ------------------------------ SSS ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:10:47.471 Apr 11 18:10:47.471: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:10:47.473 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:10:47.484 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:10:47.488 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:10:47.492: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:10:47.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] 
[Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-1976" for this suite. 04/11/24 18:10:47.497 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [24.628 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSS ------------------------------ • [SLOW TEST] [38.783 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1 test/e2e/storage/persistent_volumes-local.go:241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.028 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create prometheus metrics for volume provisioning and attach/detach test/e2e/storage/volume_metrics.go:466 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:11:07.883 Apr 11 18:11:07.883: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:11:07.884 STEP: Waiting for a default service account to be provisioned in 
namespace 04/11/24 18:11:07.894 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:11:07.898 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:11:07.901: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:11:07.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-7809" for this suite. 04/11/24 18:11:07.906 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. 
Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [18.644 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [29.876 seconds] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [52.837 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 ------------------------------ SSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.027 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController test/e2e/storage/volume_metrics.go:500 should create none metrics for pvc controller before creating any PV or PVC test/e2e/storage/volume_metrics.go:598 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:11:23.01 Apr 11 18:11:23.010: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:11:23.012 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:11:23.022 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:11:23.025 [BeforeEach] 
[sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:11:23.028: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:11:23.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-3025" for this suite. 04/11/24 18:11:23.033 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. 
Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [6.069 seconds] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup test/e2e/common/storage/empty_dir.go:76 ------------------------------ SSSS ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] Regional PD [BeforeEach] test/e2e/storage/regional_pd.go:70 RegionalPD test/e2e/storage/regional_pd.go:78 should provision storage with delayed binding [Slow] test/e2e/storage/regional_pd.go:83 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Regional PD set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:11:25.449 Apr 11 18:11:25.450: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename regional-pd 04/11/24 18:11:25.451 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:11:25.462 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:11:25.466 [BeforeEach] [sig-storage] Regional PD test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Regional PD test/e2e/storage/regional_pd.go:70 Apr 11 18:11:25.470: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD test/e2e/framework/node/init/init.go:32 Apr 11 18:11:25.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Regional PD test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Regional PD dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Regional PD tear down framework | framework.go:193 STEP: Destroying namespace "regional-pd-4866" for this suite. 
04/11/24 18:11:25.475 << End Captured GinkgoWriter Output Only supported for providers [gce gke] (not local) In [BeforeEach] at: test/e2e/storage/regional_pd.go:74 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [99.471 seconds] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File test/e2e/storage/csi_mock_volume.go:1696 ------------------------------ SSS ------------------------------ • [SLOW TEST] [62.169 seconds] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist test/e2e/storage/csi_mock_volume.go:549 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [43.989 seconds] [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod management is parallel and pod has affinity test/e2e/storage/persistent_volumes-local.go:437 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create volume metrics with the correct FilesystemMode PVC ref test/e2e/storage/volume_metrics.go:474 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:11:52.255 Apr 11 18:11:52.256: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:11:52.257 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:11:52.268 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:11:52.272 [BeforeEach] [sig-storage] [Serial] Volume metrics 
test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:11:52.276: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:11:52.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-4021" for this suite. 04/11/24 18:11:52.282 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. 
Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSS ------------------------------ • [SLOW TEST] [20.166 seconds] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile test/e2e/storage/host_path_type.go:365 ------------------------------ SSSS ------------------------------ • [SLOW TEST] [30.824 seconds] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270 ------------------------------ SSSSSS ------------------------------ S [SKIPPED] [0.030 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create volume metrics with the correct BlockMode PVC ref test/e2e/storage/volume_metrics.go:477 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:12:00.456 Apr 11 18:12:00.456: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:12:00.458 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:00.469 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:00.473 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:12:00.476: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:12:00.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics 
test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
tear down framework | framework.go:193
STEP: Destroying namespace "pv-2055" for this suite. 04/11/24 18:12:00.481
<< End Captured GinkgoWriter Output

Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70

There were additional failures detected after the initial failure.
Here's a summary - for full details run Ginkgo in verbose mode:
[PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSS
------------------------------
• [SLOW TEST] [10.207 seconds]
[sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory
test/e2e/storage/host_path_type.go:360
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [69.917 seconds]
[sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
test/e2e/storage/csi_mock_volume.go:392
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
DynamicProvisioner [Slow] [Feature:StorageProvider]
test/e2e/storage/volume_provisioning.go:150
[It] deletion should be idempotent
test/e2e/storage/volume_provisioning.go:468

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] Dynamic Provisioning
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:12:17.878
Apr 11 18:12:17.878: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename volume-provisioning 04/11/24 18:12:17.88
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:17.891
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:17.895
[BeforeEach] [sig-storage] Dynamic Provisioning
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] Dynamic Provisioning
test/e2e/storage/volume_provisioning.go:144
[It] deletion should be idempotent
test/e2e/storage/volume_provisioning.go:468
Apr 11 18:12:17.899: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] Dynamic Provisioning
test/e2e/framework/node/init/init.go:32
Apr 11 18:12:17.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] Dynamic Provisioning
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] Dynamic Provisioning
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] Dynamic Provisioning
tear down framework | framework.go:193
STEP: Destroying namespace "volume-provisioning-3431" for this suite. 04/11/24 18:12:17.904
<< End Captured GinkgoWriter Output

Only supported for providers [gce gke aws] (not local)
In [It] at: test/e2e/storage/volume_provisioning.go:474
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
Invalid AWS KMS key
test/e2e/storage/volume_provisioning.go:704
[It] should report an error and create no PV
test/e2e/storage/volume_provisioning.go:705

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] Dynamic Provisioning
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:12:17.948
Apr 11 18:12:17.948: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename volume-provisioning 04/11/24 18:12:17.949
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:17.96
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:17.964
[BeforeEach] [sig-storage] Dynamic Provisioning
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] Dynamic Provisioning
test/e2e/storage/volume_provisioning.go:144
[It] should report an error and create no PV
test/e2e/storage/volume_provisioning.go:705
Apr 11 18:12:17.968: INFO: Only supported for providers [aws] (not local)
[AfterEach] [sig-storage] Dynamic Provisioning
test/e2e/framework/node/init/init.go:32
Apr 11 18:12:17.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] Dynamic Provisioning
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] Dynamic Provisioning
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] Dynamic Provisioning
tear down framework | framework.go:193
STEP: Destroying namespace "volume-provisioning-280" for this suite. 04/11/24 18:12:17.974
<< End Captured GinkgoWriter Output

Only supported for providers [aws] (not local)
In [It] at: test/e2e/storage/volume_provisioning.go:706
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [30.759 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:241
------------------------------
SSSSSSSSSSSS
------------------------------
• [SLOW TEST] [28.677 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow]
test/e2e/storage/persistent_volumes-local.go:270
------------------------------
SS
------------------------------
S [SKIPPED] [0.031 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach]
test/e2e/storage/volume_metrics.go:62
Ephemeral
test/e2e/storage/volume_metrics.go:495
should create metrics for total number of volumes in A/D Controller
test/e2e/storage/volume_metrics.go:486

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] [Serial] Volume metrics
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:12:24.471
Apr 11 18:12:24.471: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/11/24 18:12:24.473
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:24.484
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:24.488
[BeforeEach] [sig-storage] [Serial] Volume metrics
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics
test/e2e/storage/volume_metrics.go:62
Apr 11 18:12:24.492: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
test/e2e/framework/node/init/init.go:32
Apr 11 18:12:24.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics
test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
tear down framework | framework.go:193
STEP: Destroying namespace "pv-2344" for this suite. 04/11/24 18:12:24.497
<< End Captured GinkgoWriter Output

Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70

There were additional failures detected after the initial failure.
Here's a summary - for full details run Ginkgo in verbose mode:
[PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [67.910 seconds]
[sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod has affinity
test/e2e/storage/persistent_volumes-local.go:422
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach]
test/e2e/storage/volume_metrics.go:62
Ephemeral
test/e2e/storage/volume_metrics.go:495
should create metrics for total time taken in volume operations in P/V Controller
test/e2e/storage/volume_metrics.go:480

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] [Serial] Volume metrics
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:12:31.007
Apr 11 18:12:31.007: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/11/24 18:12:31.008
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:31.02
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:31.023
[BeforeEach] [sig-storage] [Serial] Volume metrics
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics
test/e2e/storage/volume_metrics.go:62
Apr 11 18:12:31.027: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
test/e2e/framework/node/init/init.go:32
Apr 11 18:12:31.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics
test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
tear down framework | framework.go:193
STEP: Destroying namespace "pv-6039" for this suite. 04/11/24 18:12:31.033
<< End Captured GinkgoWriter Output

Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70

There were additional failures detected after the initial failure.
Here's a summary - for full details run Ginkgo in verbose mode:
[PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [8.073 seconds]
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
test/e2e/common/storage/empty_dir.go:60
------------------------------
SSSSSSSSS
------------------------------
• [SLOW TEST] [20.666 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow]
test/e2e/storage/persistent_volumes-local.go:270
------------------------------
SSSSSSSS
------------------------------
S [SKIPPED] [0.034 seconds]
[sig-storage] Flexvolumes [BeforeEach]
test/e2e/storage/flexvolume.go:171
should be mountable when non-attachable
test/e2e/storage/flexvolume.go:190

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] Flexvolumes
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:12:38.694
Apr 11 18:12:38.694: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename flexvolume 04/11/24 18:12:38.696
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:38.707
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:38.711
[BeforeEach] [sig-storage] Flexvolumes
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] Flexvolumes
test/e2e/storage/flexvolume.go:171
Apr 11 18:12:38.715: INFO: No SSH Key for provider local: 'error reading SSH key /home/xtesting/.ssh/id_rsa: 'open /home/xtesting/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-storage] Flexvolumes
test/e2e/framework/node/init/init.go:32
Apr 11 18:12:38.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] Flexvolumes
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] Flexvolumes
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] Flexvolumes
tear down framework | framework.go:193
STEP: Destroying namespace "flexvolume-6405" for this suite. 04/11/24 18:12:38.721
<< End Captured GinkgoWriter Output

No SSH Key for provider local: 'error reading SSH key /home/xtesting/.ssh/id_rsa: 'open /home/xtesting/.ssh/id_rsa: no such file or directory''
In [BeforeEach] at: test/e2e/storage/flexvolume.go:175
------------------------------
• [SLOW TEST] [181.988 seconds]
[sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
test/e2e/storage/csi_mock_volume.go:700
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.035 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach]
test/e2e/storage/volume_metrics.go:62
PVC
test/e2e/storage/volume_metrics.go:491
should create volume metrics in Volume Manager
test/e2e/storage/volume_metrics.go:483

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] [Serial] Volume metrics
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:12:44.489
Apr 11 18:12:44.490: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/11/24 18:12:44.492
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:44.509
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:44.512
[BeforeEach] [sig-storage] [Serial] Volume metrics
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics
test/e2e/storage/volume_metrics.go:62
Apr 11 18:12:44.515: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
test/e2e/framework/node/init/init.go:32
Apr 11 18:12:44.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics
test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
tear down framework | framework.go:193
STEP: Destroying namespace "pv-5427" for this suite. 04/11/24 18:12:44.52
<< End Captured GinkgoWriter Output

Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70

There were additional failures detected after the initial failure.
Here's a summary - for full details run Ginkgo in verbose mode:
[PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [16.435 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach]
test/e2e/storage/persistent_volumes-local.go:198
One pod requesting one prebound PVC
test/e2e/storage/persistent_volumes-local.go:212
should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:241

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] PersistentVolumes-local
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:12:38.729
Apr 11 18:12:38.729: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:12:38.731
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:38.741
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:38.745
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/storage/persistent_volumes-local.go:161
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
test/e2e/storage/persistent_volumes-local.go:198
Apr 11 18:12:38.761: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-t52jt" in namespace "persistent-local-volumes-test-8871" to be "running"
Apr 11 18:12:38.764: INFO: Pod "hostexec-v126-worker2-t52jt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.803723ms
Apr 11 18:12:40.768: INFO: Pod "hostexec-v126-worker2-t52jt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006975048s
Apr 11 18:12:42.768: INFO: Pod "hostexec-v126-worker2-t52jt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007345582s
Apr 11 18:12:44.768: INFO: Pod "hostexec-v126-worker2-t52jt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006712823s
Apr 11 18:12:46.768: INFO: Pod "hostexec-v126-worker2-t52jt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007214156s
Apr 11 18:12:48.768: INFO: Pod "hostexec-v126-worker2-t52jt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.00678515s
Apr 11 18:12:50.768: INFO: Pod "hostexec-v126-worker2-t52jt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.007225342s
Apr 11 18:12:52.769: INFO: Pod "hostexec-v126-worker2-t52jt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.007394241s
Apr 11 18:12:54.767: INFO: Pod "hostexec-v126-worker2-t52jt": Phase="Running", Reason="", readiness=true. Elapsed: 16.006305292s
Apr 11 18:12:54.767: INFO: Pod "hostexec-v126-worker2-t52jt" satisfied condition "running"
Apr 11 18:12:54.767: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8871 PodName:hostexec-v126-worker2-t52jt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:12:54.767: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:12:54.769: INFO: ExecWithOptions: Clientset creation
Apr 11 18:12:54.769: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-8871/pods/hostexec-v126-worker2-t52jt/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Apr 11 18:12:55.153: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Apr 11 18:12:55.153: INFO: exec v126-worker2: stdout: "0\n"
Apr 11 18:12:55.153: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Apr 11 18:12:55.153: INFO: exec v126-worker2: exit code: 0
Apr 11 18:12:55.153: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
test/e2e/storage/persistent_volumes-local.go:207
STEP: Cleaning up PVC and PV 04/11/24 18:12:55.153
[AfterEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/node/init/init.go:32
Apr 11 18:12:55.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
tear down framework | framework.go:193
STEP: Destroying namespace "persistent-local-volumes-test-8871" for this suite. 04/11/24 18:12:55.159
<< End Captured GinkgoWriter Output

Requires at least 1 scsi fs localSSD
In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255

There were additional failures detected after the initial failure.
Here's a summary - for full details run Ginkgo in verbose mode:
[PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [24.071 seconds]
[sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket
test/e2e/storage/host_path_type.go:212
------------------------------
SSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.034 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach]
test/e2e/storage/volume_metrics.go:62
PVController
test/e2e/storage/volume_metrics.go:500
should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
test/e2e/storage/volume_metrics.go:620

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] [Serial] Volume metrics
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:12:55.193
Apr 11 18:12:55.193: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pv 04/11/24 18:12:55.195
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:55.208
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:55.212
[BeforeEach] [sig-storage] [Serial] Volume metrics
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] [Serial] Volume metrics
test/e2e/storage/volume_metrics.go:62
Apr 11 18:12:55.216: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
test/e2e/framework/node/init/init.go:32
Apr 11 18:12:55.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-storage] [Serial] Volume metrics
test/e2e/storage/volume_metrics.go:101
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
tear down framework | framework.go:193
STEP: Destroying namespace "pv-6806" for this suite. 04/11/24 18:12:55.222
<< End Captured GinkgoWriter Output

Only supported for providers [gce gke aws] (not local)
In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70

There were additional failures detected after the initial failure.
Here's a summary - for full details run Ginkgo in verbose mode:
[PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [46.873 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
test/e2e/storage/persistent_volumes-local.go:277
------------------------------
SS
------------------------------
S [SKIPPED] [0.025 seconds]
[sig-storage] Mounted volume expand [Feature:StorageProvider] [BeforeEach]
test/e2e/storage/mounted_volume_resize.go:61
Should verify mounted devices can be resized
test/e2e/storage/mounted_volume_resize.go:107

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] Mounted volume expand [Feature:StorageProvider]
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:12:57.61
Apr 11 18:12:57.611: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename mounted-volume-expand 04/11/24 18:12:57.612
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:12:57.62
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:12:57.624
[BeforeEach] [sig-storage] Mounted volume expand [Feature:StorageProvider]
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] Mounted volume expand [Feature:StorageProvider]
test/e2e/storage/mounted_volume_resize.go:61
Apr 11 18:12:57.627: INFO: Only supported for providers [aws gce] (not local)
[AfterEach] [sig-storage] Mounted volume expand [Feature:StorageProvider]
test/e2e/framework/node/init/init.go:32
Apr 11 18:12:57.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] Mounted volume expand [Feature:StorageProvider]
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] Mounted volume expand [Feature:StorageProvider]
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] Mounted volume expand [Feature:StorageProvider]
tear down framework | framework.go:193
STEP: Destroying namespace "mounted-volume-expand-7185" for this suite. 04/11/24 18:12:57.631
<< End Captured GinkgoWriter Output

Only supported for providers [aws gce] (not local)
In [BeforeEach] at: test/e2e/storage/mounted_volume_resize.go:62
------------------------------
SSS
------------------------------
• [SLOW TEST] [160.159 seconds]
[sig-storage] PersistentVolumes Default StorageClass [LinuxOnly] pods that use multiple volumes should be reschedulable [Slow]
test/e2e/storage/persistent_volumes.go:334
------------------------------
SSSSS
------------------------------
• [SLOW TEST] [8.071 seconds]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/downwardapi_volume.go:94
------------------------------
SSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [10.181 seconds]
[sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev
test/e2e/storage/host_path_type.go:305
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [11.046 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
[Volume type: block]
test/e2e/storage/persistent_volumes-local.go:195
Set fsGroup for local volume [BeforeEach]
test/e2e/storage/persistent_volumes-local.go:264
should set fsGroup for one pod [Slow]
test/e2e/storage/persistent_volumes-local.go:270

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] PersistentVolumes-local
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:13:03.352
Apr 11 18:13:03.352: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:13:03.354
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:13:03.365
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:13:03.369
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/storage/persistent_volumes-local.go:161
[BeforeEach] [Volume type: block]
test/e2e/storage/persistent_volumes-local.go:198
STEP: Initializing test volumes 04/11/24 18:13:03.381
STEP: Creating block device on node "v126-worker2" using path "/tmp/local-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c" 04/11/24 18:13:03.381
Apr 11 18:13:03.389: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-5fpmx" in namespace "persistent-local-volumes-test-7730" to be "running"
Apr 11 18:13:03.392: INFO: Pod "hostexec-v126-worker2-5fpmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.874163ms
Apr 11 18:13:05.395: INFO: Pod "hostexec-v126-worker2-5fpmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006169358s
Apr 11 18:13:07.396: INFO: Pod "hostexec-v126-worker2-5fpmx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007102466s
Apr 11 18:13:09.396: INFO: Pod "hostexec-v126-worker2-5fpmx": Phase="Running", Reason="", readiness=true. Elapsed: 6.007071496s
Apr 11 18:13:09.396: INFO: Pod "hostexec-v126-worker2-5fpmx" satisfied condition "running"
Apr 11 18:13:09.396: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c && dd if=/dev/zero of=/tmp/local-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c/file] Namespace:persistent-local-volumes-test-7730 PodName:hostexec-v126-worker2-5fpmx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:13:09.396: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:13:09.398: INFO: ExecWithOptions: Clientset creation
Apr 11 18:13:09.398: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7730/pods/hostexec-v126-worker2-5fpmx/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c+%26%26+dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c%2Ffile+bs%3D4096+count%3D5120+%26%26+losetup+-f+%2Ftmp%2Flocal-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Apr 11 18:13:09.652: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7730 PodName:hostexec-v126-worker2-5fpmx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:13:09.652: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:13:09.653: INFO: ExecWithOptions: Clientset creation
Apr 11 18:13:09.653: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7730/pods/hostexec-v126-worker2-5fpmx/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Creating local PVCs and PVs 04/11/24 18:13:09.824
Apr 11 18:13:09.824: INFO: Creating a PV followed by a PVC
Apr 11 18:13:09.833: INFO: Waiting for PV local-pv47sz8 to bind to PVC pvc-hgczd
Apr 11 18:13:09.834: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hgczd] to have phase Bound
Apr 11 18:13:09.836: INFO: PersistentVolumeClaim pvc-hgczd found but phase is Pending instead of Bound.
Apr 11 18:13:11.840: INFO: PersistentVolumeClaim pvc-hgczd found but phase is Pending instead of Bound.
Apr 11 18:13:13.844: INFO: PersistentVolumeClaim pvc-hgczd found and phase=Bound (4.010535778s)
Apr 11 18:13:13.844: INFO: Waiting up to 3m0s for PersistentVolume local-pv47sz8 to have phase Bound
Apr 11 18:13:13.847: INFO: PersistentVolume local-pv47sz8 found and phase=Bound (3.03786ms)
[BeforeEach] Set fsGroup for local volume
test/e2e/storage/persistent_volumes-local.go:264
Apr 11 18:13:13.853: INFO: We don't set fsGroup on block device, skipped.
[AfterEach] [Volume type: block]
test/e2e/storage/persistent_volumes-local.go:207
STEP: Cleaning up PVC and PV 04/11/24 18:13:13.854
Apr 11 18:13:13.854: INFO: Deleting PersistentVolumeClaim "pvc-hgczd"
Apr 11 18:13:13.859: INFO: Deleting PersistentVolume "local-pv47sz8"
Apr 11 18:13:13.864: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7730 PodName:hostexec-v126-worker2-5fpmx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:13:13.864: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:13:13.866: INFO: ExecWithOptions: Clientset creation
Apr 11 18:13:13.866: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7730/pods/hostexec-v126-worker2-5fpmx/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Tear down block device "/dev/loop0" on node "v126-worker2" at path /tmp/local-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c/file 04/11/24 18:13:14.017
Apr 11 18:13:14.017: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-7730 PodName:hostexec-v126-worker2-5fpmx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:13:14.017: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:13:14.018: INFO: ExecWithOptions: Clientset creation
Apr 11 18:13:14.018: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7730/pods/hostexec-v126-worker2-5fpmx/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=losetup+-d+%2Fdev%2Floop0&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Removing the test directory /tmp/local-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c 04/11/24 18:13:14.22
Apr 11 18:13:14.220: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c] Namespace:persistent-local-volumes-test-7730 PodName:hostexec-v126-worker2-5fpmx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Apr 11 18:13:14.220: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:13:14.222: INFO: ExecWithOptions: Clientset creation
Apr 11 18:13:14.222: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7730/pods/hostexec-v126-worker2-5fpmx/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-4ea7b166-c689-4b5a-8e21-4b5067e9b10c&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
[AfterEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/node/init/init.go:32
Apr 11 18:13:14.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] PersistentVolumes-local
tear down framework | framework.go:193
STEP: Destroying namespace "persistent-local-volumes-test-7730" for this suite. 04/11/24 18:13:14.393
<< End Captured GinkgoWriter Output

We don't set fsGroup on block device, skipped.
In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:266
------------------------------
SSSSS
------------------------------
S [SKIPPED] [0.032 seconds]
[sig-storage] Pod Disks [Feature:StorageProvider] [BeforeEach]
test/e2e/storage/pd.go:76
[Serial] attach on previously attached volumes should work
test/e2e/storage/pd.go:461

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:13:14.405
Apr 11 18:13:14.405: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename pod-disks 04/11/24 18:13:14.406
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:13:14.417
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:13:14.421
[BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
test/e2e/storage/pd.go:76
Apr 11 18:13:14.425: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider]
test/e2e/framework/node/init/init.go:32
Apr 11 18:13:14.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
tear down framework | framework.go:193
STEP: Destroying namespace "pod-disks-8559" for this suite. 04/11/24 18:13:14.431
<< End Captured GinkgoWriter Output

Requires at least 2 nodes (not -1)
In [BeforeEach] at: test/e2e/storage/pd.go:77
------------------------------
SSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [24.086 seconds]
[sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
test/e2e/storage/pvc_protection.go:129
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [38.760 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:241
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [5.090 seconds]
[sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
test/e2e/storage/pv_protection.go:110
------------------------------
SSSSSSS
------------------------------
• [SLOW TEST] [77.986 seconds]
[sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
test/e2e/storage/csi_mock_volume.go:549
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [39.292 seconds]
[sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:235
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.029 seconds]
[sig-storage] Pod Disks [Feature:StorageProvider] [BeforeEach]
test/e2e/storage/pd.go:76
should be able to delete a non-existent PD without error
test/e2e/storage/pd.go:452

Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24
18:13:48.878 Apr 11 18:13:48.878: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pod-disks 04/11/24 18:13:48.879 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:13:48.889 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:13:48.893 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/storage/pd.go:76 Apr 11 18:13:48.896: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/node/init/init.go:32 Apr 11 18:13:48.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] tear down framework | framework.go:193 STEP: Destroying namespace "pod-disks-9380" for this suite. 
04/11/24 18:13:48.901 << End Captured GinkgoWriter Output Requires at least 2 nodes (not -1) In [BeforeEach] at: test/e2e/storage/pd.go:77 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [80.964 seconds] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity test/e2e/storage/csi_mock_volume.go:1413 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [34.100 seconds] [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory test/e2e/storage/host_path_type.go:72 ------------------------------ SSSSSSSSSSSS ------------------------------ • [SLOW TEST] [18.213 seconds] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev test/e2e/storage/host_path_type.go:346 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [20.194 seconds] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev test/e2e/storage/host_path_type.go:352 ------------------------------ S ------------------------------ • [SLOW TEST] [300.058 seconds] [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow] test/e2e/common/storage/projected_configmap.go:463 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [99.841 seconds] [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true 
test/e2e/storage/csi_mock_volume.go:1638 ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] Regional PD [BeforeEach] test/e2e/storage/regional_pd.go:70 RegionalPD test/e2e/storage/regional_pd.go:78 should provision storage in the allowedTopologies with delayed binding [Slow] test/e2e/storage/regional_pd.go:92 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Regional PD set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:15:03.164 Apr 11 18:15:03.164: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename regional-pd 04/11/24 18:15:03.166 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:15:03.177 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:15:03.181 [BeforeEach] [sig-storage] Regional PD test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Regional PD test/e2e/storage/regional_pd.go:70 Apr 11 18:15:03.185: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD test/e2e/framework/node/init/init.go:32 Apr 11 18:15:03.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Regional PD test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Regional PD dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Regional PD tear down framework | framework.go:193 STEP: Destroying namespace "regional-pd-489" for this suite. 
04/11/24 18:15:03.191 << End Captured GinkgoWriter Output Only supported for providers [gce gke] (not local) In [BeforeEach] at: test/e2e/storage/regional_pd.go:74 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [53.319 seconds] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [18.073 seconds] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile test/e2e/storage/host_path_type.go:225 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [76.851 seconds] [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed test/e2e/storage/csi_mock_volume.go:1638 ------------------------------ SSS ------------------------------ • [SLOW TEST] [134.591 seconds] [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes test/e2e/storage/persistent_volumes-local.go:534 ------------------------------ SSSSSSSSSSSS ------------------------------ • [SLOW TEST] [10.073 seconds] [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket test/e2e/storage/host_path_type.go:206 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [300.053 seconds] [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow] test/e2e/common/storage/secrets_volume.go:439 
------------------------------ SSSSSS ------------------------------ • [SLOW TEST] [110.181 seconds] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None test/e2e/storage/csi_mock_volume.go:1696 ------------------------------ SSSSSSSS ------------------------------ • [SLOW TEST] [10.092 seconds] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset test/e2e/storage/host_path_type.go:150 ------------------------------ SSSSSSSSSSSS ------------------------------ • [SLOW TEST] [14.100 seconds] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev test/e2e/storage/host_path_type.go:169 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [132.056 seconds] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret test/e2e/storage/ephemeral_volume.go:58 ------------------------------ SSSSSSSSSSSS ------------------------------ • [SLOW TEST] [20.205 seconds] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset test/e2e/storage/host_path_type.go:356 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [18.177 seconds] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev test/e2e/storage/host_path_type.go:282 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [8.193 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 
[Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 Two pods mounting a local volume at the same time test/e2e/storage/persistent_volumes-local.go:251 should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:252 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:15:51.701 Apr 11 18:15:51.701: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:15:51.703 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:15:51.713 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:15:51.717 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:15:51.732: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-g4rbk" in namespace "persistent-local-volumes-test-7647" to be "running" Apr 11 18:15:51.735: INFO: Pod "hostexec-v126-worker2-g4rbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416196ms Apr 11 18:15:53.739: INFO: Pod "hostexec-v126-worker2-g4rbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006745645s Apr 11 18:15:55.739: INFO: Pod "hostexec-v126-worker2-g4rbk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006270031s Apr 11 18:15:57.738: INFO: Pod "hostexec-v126-worker2-g4rbk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00611694s Apr 11 18:15:59.738: INFO: Pod "hostexec-v126-worker2-g4rbk": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.006115653s Apr 11 18:15:59.738: INFO: Pod "hostexec-v126-worker2-g4rbk" satisfied condition "running" Apr 11 18:15:59.738: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7647 PodName:hostexec-v126-worker2-g4rbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:15:59.738: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:15:59.740: INFO: ExecWithOptions: Clientset creation Apr 11 18:15:59.740: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7647/pods/hostexec-v126-worker2-g4rbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:15:59.883: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:15:59.883: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:15:59.883: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:15:59.883: INFO: exec v126-worker2: exit code: 0 Apr 11 18:15:59.883: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:15:59.883 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:15:59.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | 
framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-7647" for this suite. 04/11/24 18:15:59.889 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [20.084 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] test/e2e/storage/persistent_volumes-local.go:387 [It] should use volumes spread across nodes when pod management is parallel and pod has anti-affinity test/e2e/storage/persistent_volumes-local.go:428 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:15:42.905 Apr 11 18:15:42.905: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:15:42.907 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:15:42.918 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:15:42.922 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] StatefulSet with pod affinity [Slow] test/e2e/storage/persistent_volumes-local.go:394 STEP: Setting up local volumes on node "v126-worker2" 04/11/24 18:15:42.935 STEP: Initializing test volumes 04/11/24 18:15:42.935 Apr 11 18:15:42.942: 
INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-whxbk" in namespace "persistent-local-volumes-test-3731" to be "running" Apr 11 18:15:42.946: INFO: Pod "hostexec-v126-worker2-whxbk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.808515ms Apr 11 18:15:44.950: INFO: Pod "hostexec-v126-worker2-whxbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007803325s Apr 11 18:15:46.950: INFO: Pod "hostexec-v126-worker2-whxbk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007973901s Apr 11 18:15:48.950: INFO: Pod "hostexec-v126-worker2-whxbk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007635781s Apr 11 18:15:50.950: INFO: Pod "hostexec-v126-worker2-whxbk": Phase="Running", Reason="", readiness=true. Elapsed: 8.00714862s Apr 11 18:15:50.950: INFO: Pod "hostexec-v126-worker2-whxbk" satisfied condition "running" Apr 11 18:15:50.950: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f4c60ac7-91bd-4808-b7be-040398dd556e] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:15:50.950: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:15:50.951: INFO: ExecWithOptions: Clientset creation Apr 11 18:15:50.951: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-f4c60ac7-91bd-4808-b7be-040398dd556e&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:15:51.097: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a59348d0-93d9-4e7b-84e4-7a35530ad951] 
Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:15:51.097: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:15:51.098: INFO: ExecWithOptions: Clientset creation Apr 11 18:15:51.098: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-a59348d0-93d9-4e7b-84e4-7a35530ad951&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:15:51.260: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3fa87383-e172-4c0b-a6d2-8435a2d9ef6d] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:15:51.260: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:15:51.261: INFO: ExecWithOptions: Clientset creation Apr 11 18:15:51.261: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-3fa87383-e172-4c0b-a6d2-8435a2d9ef6d&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:15:51.418: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-be602436-4935-4be6-8d33-416c078a10a7] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:15:51.418: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:15:51.419: INFO: ExecWithOptions: Clientset creation Apr 11 18:15:51.419: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-be602436-4935-4be6-8d33-416c078a10a7&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:15:51.577: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e8be820e-899d-44d9-b0ac-183723fe7cb5] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:15:51.578: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:15:51.579: INFO: ExecWithOptions: Clientset creation Apr 11 18:15:51.579: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-e8be820e-899d-44d9-b0ac-183723fe7cb5&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:15:51.722: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b59b758f-9a32-4398-8e5a-66e788ac8650] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:15:51.722: INFO: >>> kubeConfig: 
/home/xtesting/.kube/config Apr 11 18:15:51.723: INFO: ExecWithOptions: Clientset creation Apr 11 18:15:51.723: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-b59b758f-9a32-4398-8e5a-66e788ac8650&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating local PVCs and PVs 04/11/24 18:15:51.887 Apr 11 18:15:51.887: INFO: Creating a PV followed by a PVC Apr 11 18:15:51.896: INFO: Creating a PV followed by a PVC Apr 11 18:15:51.904: INFO: Creating a PV followed by a PVC Apr 11 18:15:51.912: INFO: Creating a PV followed by a PVC Apr 11 18:15:51.918: INFO: Creating a PV followed by a PVC Apr 11 18:15:51.926: INFO: Creating a PV followed by a PVC Apr 11 18:16:01.984: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod management is parallel and pod has anti-affinity test/e2e/storage/persistent_volumes-local.go:428 Apr 11 18:16:01.984: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] test/e2e/storage/persistent_volumes-local.go:406 STEP: Cleaning up PVC and PV 04/11/24 18:16:01.984 Apr 11 18:16:01.984: INFO: Deleting PersistentVolumeClaim "pvc-6n2qk" Apr 11 18:16:01.989: INFO: Deleting PersistentVolume "local-pv8hksg" STEP: Cleaning up PVC and PV 04/11/24 18:16:01.994 Apr 11 18:16:01.994: INFO: Deleting PersistentVolumeClaim "pvc-6r6kq" Apr 11 18:16:02.000: INFO: Deleting PersistentVolume "local-pv9wj87" STEP: Cleaning up PVC and PV 04/11/24 18:16:02.005 Apr 11 18:16:02.005: INFO: Deleting PersistentVolumeClaim "pvc-bkrbp" Apr 11 18:16:02.010: INFO: Deleting PersistentVolume "local-pvdfkzl" STEP: Cleaning up PVC and PV 04/11/24 18:16:02.015 Apr 11 18:16:02.015: INFO: Deleting PersistentVolumeClaim 
"pvc-nrxkj" Apr 11 18:16:02.021: INFO: Deleting PersistentVolume "local-pv4xfnm" STEP: Cleaning up PVC and PV 04/11/24 18:16:02.026 Apr 11 18:16:02.026: INFO: Deleting PersistentVolumeClaim "pvc-lstvf" Apr 11 18:16:02.030: INFO: Deleting PersistentVolume "local-pv4rc5j" STEP: Cleaning up PVC and PV 04/11/24 18:16:02.036 Apr 11 18:16:02.036: INFO: Deleting PersistentVolumeClaim "pvc-cjrvq" Apr 11 18:16:02.041: INFO: Deleting PersistentVolume "local-pvb7lvf" STEP: Removing the test directory 04/11/24 18:16:02.046 Apr 11 18:16:02.046: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f4c60ac7-91bd-4808-b7be-040398dd556e] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:02.046: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:02.047: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:02.047: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-f4c60ac7-91bd-4808-b7be-040398dd556e&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:02.184 Apr 11 18:16:02.184: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a59348d0-93d9-4e7b-84e4-7a35530ad951] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:02.184: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:02.186: INFO: ExecWithOptions: Clientset creation 
Apr 11 18:16:02.186: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-a59348d0-93d9-4e7b-84e4-7a35530ad951&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:02.351 Apr 11 18:16:02.351: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3fa87383-e172-4c0b-a6d2-8435a2d9ef6d] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:02.352: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:02.353: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:02.353: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-3fa87383-e172-4c0b-a6d2-8435a2d9ef6d&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:02.494 Apr 11 18:16:02.494: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-be602436-4935-4be6-8d33-416c078a10a7] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:02.494: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:02.495: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:02.495: INFO: 
ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-be602436-4935-4be6-8d33-416c078a10a7&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:02.654 Apr 11 18:16:02.654: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e8be820e-899d-44d9-b0ac-183723fe7cb5] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:02.655: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:02.655: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:02.655: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-e8be820e-899d-44d9-b0ac-183723fe7cb5&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:02.814 Apr 11 18:16:02.814: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b59b758f-9a32-4398-8e5a-66e788ac8650] Namespace:persistent-local-volumes-test-3731 PodName:hostexec-v126-worker2-whxbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:02.814: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:02.815: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:02.816: INFO: ExecWithOptions: execute(POST 
https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-3731/pods/hostexec-v126-worker2-whxbk/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-b59b758f-9a32-4398-8e5a-66e788ac8650&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:16:02.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-3731" for this suite. 04/11/24 18:16:02.982 << End Captured GinkgoWriter Output Runs only when number of nodes >= 3 In [It] at: test/e2e/storage/persistent_volumes-local.go:430 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [28.092 seconds] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket test/e2e/storage/host_path_type.go:91 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.035 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity [BeforeEach] test/e2e/storage/persistent_volumes-local.go:357 should fail scheduling due to different NodeAffinity test/e2e/storage/persistent_volumes-local.go:378 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:16:07.286 Apr 11 18:16:07.286: 
INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:16:07.288 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:16:07.299 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:16:07.303 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] Pod with node different from PV's NodeAffinity test/e2e/storage/persistent_volumes-local.go:357 Apr 11 18:16:07.311: INFO: Runs only when number of nodes >= 2 [AfterEach] Pod with node different from PV's NodeAffinity test/e2e/storage/persistent_volumes-local.go:373 STEP: Cleaning up PVC and PV 04/11/24 18:16:07.311 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:16:07.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-3782" for this suite. 04/11/24 18:16:07.316 << End Captured GinkgoWriter Output Runs only when number of nodes >= 2 In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:359 There were additional failures detected after the initial failure. 
Here's a summary - for full details run Ginkgo in verbose mode:
[PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [8.070 seconds]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
test/e2e/common/storage/downwardapi_volume.go:109
------------------------------
SSSSSS
------------------------------
• [SLOW TEST] [55.575 seconds]
[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:252
------------------------------
S [SKIPPED] [0.033 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
DynamicProvisioner Default
test/e2e/storage/volume_provisioning.go:601
[It] should create and delete default persistent volumes [Slow]
test/e2e/storage/volume_provisioning.go:602
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] Dynamic Provisioning
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:16:11.367
Apr 11 18:16:11.367: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename volume-provisioning 04/11/24 18:16:11.368
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:16:11.381
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:16:11.385
[BeforeEach] [sig-storage] Dynamic Provisioning
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] Dynamic Provisioning
test/e2e/storage/volume_provisioning.go:144
[It] should create and delete default persistent volumes [Slow]
test/e2e/storage/volume_provisioning.go:602
Apr 11 18:16:11.389: INFO: Only supported for providers [openstack gce aws
gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning test/e2e/framework/node/init/init.go:32 Apr 11 18:16:11.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Dynamic Provisioning test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Dynamic Provisioning dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Dynamic Provisioning tear down framework | framework.go:193 STEP: Destroying namespace "volume-provisioning-5547" for this suite. 04/11/24 18:16:11.394 << End Captured GinkgoWriter Output Only supported for providers [openstack gce aws gke vsphere azure] (not local) In [It] at: test/e2e/storage/volume_provisioning.go:603 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [24.095 seconds] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset test/e2e/storage/host_path_type.go:82 ------------------------------ SSSS ------------------------------ S [SKIPPED] [0.029 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create metrics for total time taken in volume operations in P/V Controller test/e2e/storage/volume_metrics.go:480 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:16:18.196 Apr 11 18:16:18.196: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:16:18.198 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:16:18.208 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:16:18.212 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 
[BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:16:18.216: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:16:18.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-1978" for this suite. 04/11/24 18:16:18.221 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. 
Here's a summary - for full details run Ginkgo in verbose mode:
[PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [202.688 seconds]
[sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error
test/e2e/storage/csi_mock_volume.go:942
------------------------------
SSSSSSS
------------------------------
• [SLOW TEST] [6.222 seconds]
[sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory
test/e2e/storage/host_path_type.go:290
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [23.928 seconds]
[sig-storage] PersistentVolumes-local
test/e2e/storage/utils/framework.go:23
StatefulSet with pod affinity [Slow]
test/e2e/storage/persistent_volumes-local.go:387
[It] should use volumes spread across nodes when pod has anti-affinity
test/e2e/storage/persistent_volumes-local.go:413
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-storage] PersistentVolumes-local
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:16:08.011
Apr 11 18:16:08.011: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:16:08.013
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:16:08.024
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:16:08.028
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] PersistentVolumes-local
test/e2e/storage/persistent_volumes-local.go:161
[BeforeEach] StatefulSet with pod affinity
[Slow] test/e2e/storage/persistent_volumes-local.go:394 STEP: Setting up local volumes on node "v126-worker2" 04/11/24 18:16:08.041 STEP: Initializing test volumes 04/11/24 18:16:08.041 Apr 11 18:16:08.049: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-s6m8p" in namespace "persistent-local-volumes-test-7184" to be "running" Apr 11 18:16:08.052: INFO: Pod "hostexec-v126-worker2-s6m8p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855386ms Apr 11 18:16:10.055: INFO: Pod "hostexec-v126-worker2-s6m8p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006410969s Apr 11 18:16:12.056: INFO: Pod "hostexec-v126-worker2-s6m8p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007002882s Apr 11 18:16:14.055: INFO: Pod "hostexec-v126-worker2-s6m8p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006240622s Apr 11 18:16:16.056: INFO: Pod "hostexec-v126-worker2-s6m8p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.006934962s Apr 11 18:16:18.056: INFO: Pod "hostexec-v126-worker2-s6m8p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.007081137s Apr 11 18:16:20.055: INFO: Pod "hostexec-v126-worker2-s6m8p": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.00654954s Apr 11 18:16:20.055: INFO: Pod "hostexec-v126-worker2-s6m8p" satisfied condition "running" Apr 11 18:16:20.055: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4a7ea1e4-edd4-4621-8cb4-45e2b5bff8b9] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:20.055: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:20.056: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:20.056: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-4a7ea1e4-edd4-4621-8cb4-45e2b5bff8b9&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:16:20.169: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-88673fbb-288e-45df-9fe5-5466a5f10ebd] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:20.169: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:20.170: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:20.170: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-88673fbb-288e-45df-9fe5-5466a5f10ebd&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 
18:16:20.317: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4825e9aa-0229-4499-8aed-dd441732b5b1] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:20.317: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:20.319: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:20.319: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-4825e9aa-0229-4499-8aed-dd441732b5b1&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:16:20.437: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-00d355bd-a808-41d0-9b2c-8910d9c2b51c] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:20.437: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:20.439: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:20.439: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-00d355bd-a808-41d0-9b2c-8910d9c2b51c&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:16:20.591: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-4f86f3c0-65a4-4f3e-913e-6bde75818b89] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:20.591: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:20.593: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:20.593: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-4f86f3c0-65a4-4f3e-913e-6bde75818b89&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:16:20.757: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7b3dbe2d-68d6-4e03-8bf8-f8b5e05c0945] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:20.757: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:20.758: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:20.758: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-7b3dbe2d-68d6-4e03-8bf8-f8b5e05c0945&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating local PVCs and PVs 04/11/24 18:16:20.881 Apr 11 18:16:20.881: INFO: Creating a PV followed by a PVC Apr 11 18:16:20.890: INFO: Creating a PV followed by a PVC Apr 11 18:16:20.898: INFO: Creating a PV followed by a PVC Apr 
11 18:16:20.906: INFO: Creating a PV followed by a PVC Apr 11 18:16:20.915: INFO: Creating a PV followed by a PVC Apr 11 18:16:20.922: INFO: Creating a PV followed by a PVC Apr 11 18:16:30.981: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod has anti-affinity test/e2e/storage/persistent_volumes-local.go:413 Apr 11 18:16:30.981: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] test/e2e/storage/persistent_volumes-local.go:406 STEP: Cleaning up PVC and PV 04/11/24 18:16:30.981 Apr 11 18:16:30.981: INFO: Deleting PersistentVolumeClaim "pvc-ccztq" Apr 11 18:16:30.986: INFO: Deleting PersistentVolume "local-pvbmhrh" STEP: Cleaning up PVC and PV 04/11/24 18:16:30.992 Apr 11 18:16:30.992: INFO: Deleting PersistentVolumeClaim "pvc-nrnmg" Apr 11 18:16:30.997: INFO: Deleting PersistentVolume "local-pvtcjlm" STEP: Cleaning up PVC and PV 04/11/24 18:16:31.002 Apr 11 18:16:31.002: INFO: Deleting PersistentVolumeClaim "pvc-p46s4" Apr 11 18:16:31.007: INFO: Deleting PersistentVolume "local-pvh668l" STEP: Cleaning up PVC and PV 04/11/24 18:16:31.011 Apr 11 18:16:31.012: INFO: Deleting PersistentVolumeClaim "pvc-ntdf5" Apr 11 18:16:31.018: INFO: Deleting PersistentVolume "local-pv9j8d2" STEP: Cleaning up PVC and PV 04/11/24 18:16:31.022 Apr 11 18:16:31.022: INFO: Deleting PersistentVolumeClaim "pvc-cb4zk" Apr 11 18:16:31.027: INFO: Deleting PersistentVolume "local-pvfkt75" STEP: Cleaning up PVC and PV 04/11/24 18:16:31.031 Apr 11 18:16:31.032: INFO: Deleting PersistentVolumeClaim "pvc-ff97z" Apr 11 18:16:31.037: INFO: Deleting PersistentVolume "local-pvtgnln" STEP: Removing the test directory 04/11/24 18:16:31.042 Apr 11 18:16:31.042: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4a7ea1e4-edd4-4621-8cb4-45e2b5bff8b9] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:31.042: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:31.043: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:31.043: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-4a7ea1e4-edd4-4621-8cb4-45e2b5bff8b9&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:31.208 Apr 11 18:16:31.208: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-88673fbb-288e-45df-9fe5-5466a5f10ebd] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:31.208: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:31.209: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:31.209: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-88673fbb-288e-45df-9fe5-5466a5f10ebd&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:31.367 Apr 11 18:16:31.367: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4825e9aa-0229-4499-8aed-dd441732b5b1] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:31.367: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:31.368: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:31.369: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-4825e9aa-0229-4499-8aed-dd441732b5b1&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:31.536 Apr 11 18:16:31.536: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-00d355bd-a808-41d0-9b2c-8910d9c2b51c] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:31.536: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:31.537: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:31.537: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-00d355bd-a808-41d0-9b2c-8910d9c2b51c&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:31.696 Apr 11 18:16:31.696: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4f86f3c0-65a4-4f3e-913e-6bde75818b89] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:31.696: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:31.698: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:31.698: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-4f86f3c0-65a4-4f3e-913e-6bde75818b89&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Removing the test directory 04/11/24 18:16:31.817 Apr 11 18:16:31.817: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7b3dbe2d-68d6-4e03-8bf8-f8b5e05c0945] Namespace:persistent-local-volumes-test-7184 PodName:hostexec-v126-worker2-s6m8p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:16:31.817: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:16:31.818: INFO: ExecWithOptions: Clientset creation Apr 11 18:16:31.818: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-7184/pods/hostexec-v126-worker2-s6m8p/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-volume-test-7b3dbe2d-68d6-4e03-8bf8-f8b5e05c0945&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:16:31.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local 
dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-7184" for this suite. 04/11/24 18:16:31.935 << End Captured GinkgoWriter Output Runs only when number of nodes >= 3 In [It] at: test/e2e/storage/persistent_volumes-local.go:415 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.030 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 Local volume that cannot be mounted [Slow] test/e2e/storage/persistent_volumes-local.go:307 [It] should fail due to wrong node test/e2e/storage/persistent_volumes-local.go:327 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:16:31.949 Apr 11 18:16:31.949: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:16:31.95 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:16:31.96 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:16:31.963 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [It] should fail due to wrong node test/e2e/storage/persistent_volumes-local.go:327 Apr 11 18:16:31.970: INFO: Runs only when number of nodes >= 2 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:16:31.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] 
[sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-7248" for this suite. 04/11/24 18:16:31.974 << End Captured GinkgoWriter Output Runs only when number of nodes >= 2 In [It] at: test/e2e/storage/persistent_volumes-local.go:329 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.026 seconds] [sig-storage] Pod Disks [Feature:StorageProvider] [BeforeEach] test/e2e/storage/pd.go:76 schedule pods each with a PD, delete pod and verify detach [Slow] test/e2e/storage/pd.go:95 for RW PD with pod delete grace period of "immediate (0s)" test/e2e/storage/pd.go:137 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:16:31.993 Apr 11 18:16:31.993: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pod-disks 04/11/24 18:16:31.994 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:16:32.003 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:16:32.006 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/storage/pd.go:76 Apr 11 18:16:32.009: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/node/init/init.go:32 Apr 11 18:16:32.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider] tear down framework | framework.go:193 STEP: 
Destroying namespace "pod-disks-5850" for this suite. 04/11/24 18:16:32.014 << End Captured GinkgoWriter Output Requires at least 2 nodes (not -1) In [BeforeEach] at: test/e2e/storage/pd.go:77 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [45.017 seconds] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success test/e2e/storage/csi_mock_volume.go:942 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [28.759 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1 test/e2e/storage/persistent_volumes-local.go:241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [10.433 seconds] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] test/e2e/common/storage/projected_configmap.go:78 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [12.069 seconds] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs) test/e2e/common/storage/empty_dir.go:68 ------------------------------ SSSSSSSSSS ------------------------------ • [SLOW TEST] [300.053 seconds] [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow] test/e2e/common/storage/projected_secret.go:414 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.030 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController 
test/e2e/storage/volume_metrics.go:500 should create total pv count metrics for with plugin and volume mode labels after creating pv test/e2e/storage/volume_metrics.go:630 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:16:52.385 Apr 11 18:16:52.386: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:16:52.387 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:16:52.397 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:16:52.401 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:16:52.405: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:16:52.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9288" for this suite. 04/11/24 18:16:52.41 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. 
Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSS ------------------------------ • [SLOW TEST] [49.590 seconds] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:252 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [6.081 seconds] [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] test/e2e/common/storage/projected_secret.go:93 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.035 seconds] [sig-storage] Dynamic Provisioning test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] [Feature:StorageProvider] test/e2e/storage/volume_provisioning.go:150 [It] should test that deleting a claim before the volume is provisioned deletes the volume. 
test/e2e/storage/volume_provisioning.go:422 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Dynamic Provisioning set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:16:58.569 Apr 11 18:16:58.569: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename volume-provisioning 04/11/24 18:16:58.571 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:16:58.586 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:16:58.59 [BeforeEach] [sig-storage] Dynamic Provisioning test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Dynamic Provisioning test/e2e/storage/volume_provisioning.go:144 [It] should test that deleting a claim before the volume is provisioned deletes the volume. test/e2e/storage/volume_provisioning.go:422 Apr 11 18:16:58.594: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning test/e2e/framework/node/init/init.go:32 Apr 11 18:16:58.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Dynamic Provisioning test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Dynamic Provisioning dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Dynamic Provisioning tear down framework | framework.go:193 STEP: Destroying namespace "volume-provisioning-5881" for this suite. 
04/11/24 18:16:58.599 << End Captured GinkgoWriter Output Only supported for providers [openstack gce aws gke vsphere azure] (not local) In [It] at: test/e2e/storage/volume_provisioning.go:428 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [34.935 seconds] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 ------------------------------ SSSSSSSSSSSS ------------------------------ • [SLOW TEST] [16.076 seconds] [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running test/e2e/storage/persistent_volumes-local.go:656 ------------------------------ SSSSSSS ------------------------------ • [SLOW TEST] [10.068 seconds] [sig-storage] HostPath should support subPath [NodeConformance] test/e2e/common/storage/host_path.go:96 ------------------------------ SS ------------------------------ • [SLOW TEST] [30.093 seconds] [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod test/e2e/storage/pvc_protection.go:117 ------------------------------ SSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] Regional PD [BeforeEach] test/e2e/storage/regional_pd.go:70 RegionalPD test/e2e/storage/regional_pd.go:78 should provision storage in the allowedTopologies [Slow] test/e2e/storage/regional_pd.go:88 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Regional PD set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:17:12.603 Apr 11 18:17:12.604: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename regional-pd 04/11/24 18:17:12.605 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:17:12.616 STEP: Waiting for kube-root-ca.crt to be 
provisioned in namespace 04/11/24 18:17:12.62 [BeforeEach] [sig-storage] Regional PD test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Regional PD test/e2e/storage/regional_pd.go:70 Apr 11 18:17:12.624: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD test/e2e/framework/node/init/init.go:32 Apr 11 18:17:12.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Regional PD test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Regional PD dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Regional PD tear down framework | framework.go:193 STEP: Destroying namespace "regional-pd-9393" for this suite. 04/11/24 18:17:12.629 << End Captured GinkgoWriter Output Only supported for providers [gce gke] (not local) In [BeforeEach] at: test/e2e/storage/regional_pd.go:74 ------------------------------ SSSSS ------------------------------ • [SLOW TEST] [57.466 seconds] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on test/e2e/storage/csi_mock_volume.go:799 ------------------------------ SSSSSSS ------------------------------ • [SLOW TEST] [24.016 seconds] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1 test/e2e/storage/persistent_volumes-local.go:241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [8.196 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 Set fsGroup for local volume test/e2e/storage/persistent_volumes-local.go:263 should set same fsGroup for two pods simultaneously [Slow] 
test/e2e/storage/persistent_volumes-local.go:277 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:17:25.609 Apr 11 18:17:25.609: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:17:25.611 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:17:25.621 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:17:25.625 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:17:25.641: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-pr5wd" in namespace "persistent-local-volumes-test-1413" to be "running" Apr 11 18:17:25.644: INFO: Pod "hostexec-v126-worker2-pr5wd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.092252ms Apr 11 18:17:27.648: INFO: Pod "hostexec-v126-worker2-pr5wd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007326576s Apr 11 18:17:29.648: INFO: Pod "hostexec-v126-worker2-pr5wd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007222469s Apr 11 18:17:31.648: INFO: Pod "hostexec-v126-worker2-pr5wd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007088133s Apr 11 18:17:33.647: INFO: Pod "hostexec-v126-worker2-pr5wd": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.006778543s Apr 11 18:17:33.647: INFO: Pod "hostexec-v126-worker2-pr5wd" satisfied condition "running" Apr 11 18:17:33.647: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1413 PodName:hostexec-v126-worker2-pr5wd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:17:33.647: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:17:33.649: INFO: ExecWithOptions: Clientset creation Apr 11 18:17:33.649: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1413/pods/hostexec-v126-worker2-pr5wd/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:17:33.794: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:17:33.794: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:17:33.795: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:17:33.795: INFO: exec v126-worker2: exit code: 0 Apr 11 18:17:33.795: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:17:33.795 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:17:33.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | 
framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-1413" for this suite. 04/11/24 18:17:33.8 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [119.995 seconds] [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off test/e2e/storage/csi_mock_volume.go:700 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVController test/e2e/storage/volume_metrics.go:500 should create unbound pv count metrics for pvc controller after creating pv only test/e2e/storage/volume_metrics.go:602 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:17:39.181 Apr 11 18:17:39.181: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:17:39.183 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:17:39.192 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:17:39.196 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:17:39.200: INFO: 
Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:17:39.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-6616" for this suite. 04/11/24 18:17:39.205 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSS ------------------------------ • [SLOW TEST] [26.949 seconds] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity test/e2e/storage/csi_mock_volume.go:1413 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.032 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 PVC test/e2e/storage/volume_metrics.go:491 should create prometheus metrics for volume provisioning errors [Slow] test/e2e/storage/volume_metrics.go:471 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:17:42.735 Apr 11 18:17:42.736: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:17:42.737 STEP: Waiting for a default service account to be 
provisioned in namespace 04/11/24 18:17:42.749 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:17:42.752 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:17:42.756: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:17:42.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-3331" for this suite. 04/11/24 18:17:42.762 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. 
Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSS ------------------------------ • [FAILED] [36.346 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] test/e2e/storage/persistent_volumes-local.go:195 Set fsGroup for local volume test/e2e/storage/persistent_volumes-local.go:263 [It] should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:17:08.693 Apr 11 18:17:08.693: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:17:08.694 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:17:08.708 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:17:08.713 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: dir-bindmounted] test/e2e/storage/persistent_volumes-local.go:198 STEP: Initializing test volumes 04/11/24 18:17:08.725 Apr 11 18:17:08.733: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-5hfkf" in namespace "persistent-local-volumes-test-2982" to be "running" Apr 11 18:17:08.736: INFO: Pod "hostexec-v126-worker2-5hfkf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.129765ms Apr 11 18:17:10.741: INFO: Pod "hostexec-v126-worker2-5hfkf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007657928s Apr 11 18:17:12.741: INFO: Pod "hostexec-v126-worker2-5hfkf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.007713741s Apr 11 18:17:14.739: INFO: Pod "hostexec-v126-worker2-5hfkf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006299391s Apr 11 18:17:16.740: INFO: Pod "hostexec-v126-worker2-5hfkf": Phase="Running", Reason="", readiness=true. Elapsed: 8.006984386s Apr 11 18:17:16.740: INFO: Pod "hostexec-v126-worker2-5hfkf" satisfied condition "running" Apr 11 18:17:16.740: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3 && mount --bind /tmp/local-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3 /tmp/local-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3] Namespace:persistent-local-volumes-test-2982 PodName:hostexec-v126-worker2-5hfkf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:17:16.740: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:17:16.742: INFO: ExecWithOptions: Clientset creation Apr 11 18:17:16.742: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-2982/pods/hostexec-v126-worker2-5hfkf/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+%2Ftmp%2Flocal-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3+%26%26+mount+--bind+%2Ftmp%2Flocal-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3+%2Ftmp%2Flocal-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Creating local PVCs and PVs 04/11/24 18:17:16.894 Apr 11 18:17:16.894: INFO: Creating a PV followed by a PVC Apr 11 18:17:16.903: INFO: Waiting for PV local-pvblsd6 to bind to PVC pvc-scdsk Apr 11 18:17:16.903: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-scdsk] to have phase Bound Apr 11 18:17:16.906: INFO: PersistentVolumeClaim pvc-scdsk found but phase is Pending instead of Bound. 
Apr 11 18:17:18.910: INFO: PersistentVolumeClaim pvc-scdsk found but phase is Pending instead of Bound. Apr 11 18:17:20.913: INFO: PersistentVolumeClaim pvc-scdsk found but phase is Pending instead of Bound. Apr 11 18:17:22.918: INFO: PersistentVolumeClaim pvc-scdsk found but phase is Pending instead of Bound. Apr 11 18:17:24.921: INFO: PersistentVolumeClaim pvc-scdsk found but phase is Pending instead of Bound. Apr 11 18:17:26.925: INFO: PersistentVolumeClaim pvc-scdsk found but phase is Pending instead of Bound. Apr 11 18:17:28.929: INFO: PersistentVolumeClaim pvc-scdsk found and phase=Bound (12.026458985s) Apr 11 18:17:28.929: INFO: Waiting up to 3m0s for PersistentVolume local-pvblsd6 to have phase Bound Apr 11 18:17:28.933: INFO: PersistentVolume local-pvblsd6 found and phase=Bound (3.261486ms) [BeforeEach] Set fsGroup for local volume test/e2e/storage/persistent_volumes-local.go:264 [It] should set fsGroup for one pod [Slow] test/e2e/storage/persistent_volumes-local.go:270 STEP: Checking fsGroup is set 04/11/24 18:17:28.938 STEP: Creating a pod 04/11/24 18:17:28.938 Apr 11 18:17:28.943: INFO: Waiting up to 5m0s for pod "pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5" in namespace "persistent-local-volumes-test-2982" to be "running" Apr 11 18:17:28.946: INFO: Pod "pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335533ms Apr 11 18:17:30.950: INFO: Pod "pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006321134s Apr 11 18:17:32.950: INFO: Pod "pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006406077s Apr 11 18:17:34.949: INFO: Pod "pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.005930437s Apr 11 18:17:36.950: INFO: Pod "pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.006293854s Apr 11 18:17:38.950: INFO: Pod "pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5": Phase="Running", Reason="", readiness=true. Elapsed: 10.007007892s Apr 11 18:17:38.950: INFO: Pod "pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5" satisfied condition "running" Apr 11 18:17:38.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:35339 --kubeconfig=/home/xtesting/.kube/config --namespace=persistent-local-volumes-test-2982 exec pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5 --namespace=persistent-local-volumes-test-2982 -- stat -c %g /mnt/volume1' Apr 11 18:17:39.239: INFO: stderr: "" Apr 11 18:17:39.239: INFO: stdout: "1000\n" Apr 11 18:17:41.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:35339 --kubeconfig=/home/xtesting/.kube/config --namespace=persistent-local-volumes-test-2982 exec pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5 --namespace=persistent-local-volumes-test-2982 -- stat -c %g /mnt/volume1' Apr 11 18:17:41.456: INFO: stderr: "" Apr 11 18:17:41.456: INFO: stdout: "1000\n" Apr 11 18:17:43.457: INFO: Unexpected error: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5: <*errors.errorString | 0xc002007580>: { s: "Failed to find \"1234\", last result: \"1000\n\"", } Apr 11 18:17:43.457: FAIL: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5: Failed to find "1234", last result: "1000 " Full Stack Trace k8s.io/kubernetes/test/e2e/storage.createPodWithFsGroupTest(0xc004c04ab0, 0x17?, 0x4d2, 0x0?) 
test/e2e/storage/persistent_volumes-local.go:807 +0x29e k8s.io/kubernetes/test/e2e/storage.glob..func25.2.6.2() test/e2e/storage/persistent_volumes-local.go:272 +0x65 [AfterEach] [Volume type: dir-bindmounted] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC and PV 04/11/24 18:17:43.458 Apr 11 18:17:43.458: INFO: Deleting PersistentVolumeClaim "pvc-scdsk" Apr 11 18:17:43.463: INFO: Deleting PersistentVolume "local-pvblsd6" STEP: Removing the test directory 04/11/24 18:17:43.469 Apr 11 18:17:43.469: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3 && rm -r /tmp/local-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3] Namespace:persistent-local-volumes-test-2982 PodName:hostexec-v126-worker2-5hfkf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:17:43.469: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:17:43.470: INFO: ExecWithOptions: Clientset creation Apr 11 18:17:43.470: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-2982/pods/hostexec-v126-worker2-5hfkf/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%2Ftmp%2Flocal-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3+%26%26+rm+-r+%2Ftmp%2Flocal-volume-test-475ff552-099b-4fbc-81ca-5b6c99924de3&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:17:43.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 STEP: dump namespace information after failure 
04/11/24 18:17:43.657 STEP: Collecting events from namespace "persistent-local-volumes-test-2982". 04/11/24 18:17:43.657 STEP: Found 10 events. 04/11/24 18:17:43.661 Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:08 +0000 UTC - event for hostexec-v126-worker2-5hfkf: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-2982/hostexec-v126-worker2-5hfkf to v126-worker2 Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:09 +0000 UTC - event for hostexec-v126-worker2-5hfkf: {kubelet v126-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:09 +0000 UTC - event for hostexec-v126-worker2-5hfkf: {kubelet v126-worker2} Created: Created container agnhost-container Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:10 +0000 UTC - event for hostexec-v126-worker2-5hfkf: {kubelet v126-worker2} Started: Started container agnhost-container Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:16 +0000 UTC - event for pvc-scdsk: {persistentvolume-controller } ProvisioningFailed: no volume plugin matched name: kubernetes.io/no-provisioner Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:28 +0000 UTC - event for pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-2982/pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5 to v126-worker2 Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:30 +0000 UTC - event for pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5: {kubelet v126-worker2} AlreadyMountedVolume: The requested fsGroup is 1234, but the volume local-pvblsd6 has GID 1000. The volume may not be shareable. 
Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:31 +0000 UTC - event for pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5: {kubelet v126-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:31 +0000 UTC - event for pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5: {kubelet v126-worker2} Created: Created container write-pod Apr 11 18:17:43.661: INFO: At 2024-04-11 18:17:32 +0000 UTC - event for pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5: {kubelet v126-worker2} Started: Started container write-pod Apr 11 18:17:43.664: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 18:17:43.664: INFO: hostexec-v126-worker2-5hfkf v126-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2024-04-11 18:17:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2024-04-11 18:17:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2024-04-11 18:17:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2024-04-11 18:17:08 +0000 UTC }] Apr 11 18:17:43.664: INFO: pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5 v126-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2024-04-11 18:17:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2024-04-11 18:17:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2024-04-11 18:17:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2024-04-11 18:17:28 +0000 UTC }] Apr 11 18:17:43.664: INFO: Apr 11 18:17:43.680: INFO: Logging node info for node v126-control-plane Apr 11 18:17:43.683: INFO: Node Info: &Node{ObjectMeta:{v126-control-plane 3a64757e-5950-42e6-b8ed-4667f760117e 7547927 0 2024-02-15 12:43:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2024-04-11 18:16:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 18:16:37 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 18:16:37 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 18:16:37 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 18:16:37 +0000 UTC,LastTransitionTime:2024-02-15 12:43:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.4,},NodeAddress{Type:Hostname,Address:v126-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a96e30d08f8c42b585519e2395c12ea2,SystemUUID:a3f13d5f-0717-4c0d-a2df-008e7d843a90,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:23d4ae0566b98dfee53d4b7a9ef824b6ed1c6b3a8f52bab927e5521f871b5104 docker.io/aquasec/kube-bench:v0.6.10],SizeBytes:18243491,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 18:17:43.684: INFO: Logging kubelet events for node 
v126-control-plane Apr 11 18:17:43.687: INFO: Logging pods the kubelet thinks is on node v126-control-plane Apr 11 18:17:43.715: INFO: etcd-v126-control-plane started at 2024-02-15 12:43:08 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container etcd ready: true, restart count 0 Apr 11 18:17:43.715: INFO: kube-scheduler-v126-control-plane started at 2024-02-15 12:43:08 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container kube-scheduler ready: true, restart count 0 Apr 11 18:17:43.715: INFO: kube-proxy-lxqfk started at 2024-02-15 12:43:20 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:17:43.715: INFO: kindnet-vn4j4 started at 2024-02-15 12:43:20 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:17:43.715: INFO: kube-apiserver-v126-control-plane started at 2024-02-15 12:43:09 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container kube-apiserver ready: true, restart count 0 Apr 11 18:17:43.715: INFO: kube-controller-manager-v126-control-plane started at 2024-02-15 12:43:09 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container kube-controller-manager ready: true, restart count 0 Apr 11 18:17:43.715: INFO: coredns-787d4945fb-w6k86 started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container coredns ready: true, restart count 0 Apr 11 18:17:43.715: INFO: coredns-787d4945fb-xp5nv started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container coredns ready: true, restart count 0 Apr 11 18:17:43.715: INFO: local-path-provisioner-6bd6454576-2g84t started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container local-path-provisioner ready: true, restart count 0 
Apr 11 18:17:43.715: INFO: create-loop-devs-d8k28 started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.715: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:17:43.777: INFO: Latency metrics for node v126-control-plane Apr 11 18:17:43.777: INFO: Logging node info for node v126-worker Apr 11 18:17:43.780: INFO: Node Info: &Node{ObjectMeta:{v126-worker d69cee07-558d-4498-86d9-cff1abedd857 7545697 0 2024-02-15 12:43:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-worker kubernetes.io/os:linux topology.hostpath.csi/node:v126-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2024-03-28 18:03:36 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}} status} {kube-controller-manager Update v1 2024-03-28 19:11:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}} } {kubectl Update v1 2024-03-28 19:11:09 +0000 UTC FieldsV1 {"f:spec":{"f:unschedulable":{}}} } {kubelet Update v1 2024-04-11 18:14:48 +0000 UTC 
FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-worker,Unschedulable:true,Taints:[]Taint{Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:2024-03-28 19:11:09 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{1 0} {} 1 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{1 0} {} 1 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 18:14:48 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 18:14:48 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 18:14:48 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 18:14:48 +0000 UTC,LastTransitionTime:2024-02-15 12:43:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.2,},NodeAddress{Type:Hostname,Address:v126-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d18212626141459c831725483d7679ab,SystemUUID:398bd568-4555-4b1a-8660-f75be5056848,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 
docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:b4aaa2ee36bf687dd0f147ced7dce708398fae6d8410067c9ad9a90f162d55e5 docker.io/litmuschaos/go-runner:2.14.0],SizeBytes:170207512,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db 
registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2 registry.k8s.io/etcd:3.5.10-0],SizeBytes:56649232,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:e64fe49f059f513a09c772a8972172b2af6833d092c06cc311171d7135e4525a 
docker.io/aquasec/kube-hunter:0.6.8],SizeBytes:38278203,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:69b1a6ff1409fc80cf169503e29d10e049b46108e57436e452e3800f5f911d70 docker.io/litmuschaos/chaos-operator:2.14.0],SizeBytes:28963838,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:a5fcf3f1766975ec6e4730c0aefdf9705af20c67d9ff68372168c8856acba7af docker.io/litmuschaos/chaos-runner:2.14.0],SizeBytes:26125622,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:23d4ae0566b98dfee53d4b7a9ef824b6ed1c6b3a8f52bab927e5521f871b5104 
docker.io/aquasec/kube-bench:v0.6.10],SizeBytes:18243491,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:fc259355994e6c6c1025a7cd2d1bdbf201708e9e11ef1dfd3ef787a7ce45730d registry.k8s.io/build-image/distroless-iptables:v0.2.9],SizeBytes:9501695,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 18:17:43.781: INFO: Logging kubelet events for node v126-worker Apr 11 18:17:43.784: INFO: Logging pods the kubelet thinks is on node v126-worker Apr 11 18:17:43.807: INFO: create-loop-devs-qf7hw started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.807: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:17:43.807: INFO: kindnet-llt78 started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.807: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:17:43.807: INFO: kube-proxy-6gjpv started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:43.807: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:17:44.037: INFO: Latency metrics for node v126-worker Apr 11 18:17:44.037: INFO: Logging node info for node v126-worker2 Apr 11 18:17:44.041: INFO: Node Info: &Node{ObjectMeta:{v126-worker2 325f688d-d472-4d00-af05-b1602ff4d011 7549032 0 2024-02-15 12:43:24 +0000 UTC 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:v126-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-8220":"v126-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2024-03-23 10:52:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {e2e.test Update v1 2024-04-11 18:06:27 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kube-controller-manager Update v1 2024-04-11 18:17:24 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2024-04-11 18:17:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 18:17:26 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 18:17:26 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 18:17:26 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 18:17:26 +0000 UTC,LastTransitionTime:2024-02-15 12:43:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.3,},NodeAddress{Type:Hostname,Address:v126-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a4f500a92ab44e68eb943ba261bf2b3,SystemUUID:3a962073-037f-4c28-a122-8f4b5dfc4ca0,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 
docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:b4aaa2ee36bf687dd0f147ced7dce708398fae6d8410067c9ad9a90f162d55e5 docker.io/litmuschaos/go-runner:2.14.0],SizeBytes:170207512,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 
registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2 registry.k8s.io/etcd:3.5.10-0],SizeBytes:56649232,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:69b1a6ff1409fc80cf169503e29d10e049b46108e57436e452e3800f5f911d70 docker.io/litmuschaos/chaos-operator:2.14.0],SizeBytes:28963838,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:a5fcf3f1766975ec6e4730c0aefdf9705af20c67d9ff68372168c8856acba7af 
docker.io/litmuschaos/chaos-runner:2.14.0],SizeBytes:26125622,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef 
docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-8220^bda36f71-f82f-11ee-a2c8-0a6394580117],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-8220^bda36f71-f82f-11ee-a2c8-0a6394580117,DevicePath:,},},Config:nil,},} Apr 11 18:17:44.042: INFO: Logging kubelet events for node v126-worker2 Apr 11 18:17:44.045: INFO: Logging pods the kubelet thinks is on node v126-worker2 Apr 11 18:17:44.072: INFO: pod-ephm-test-projected-sqsx started at 2024-04-11 18:17:08 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container test-container-subpath-projected-sqsx ready: false, restart count 0 Apr 11 18:17:44.072: INFO: pod-47dbdb2f-b32a-4072-896c-ceaa1bb979b6 started at 2024-04-11 18:16:54 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container write-pod ready: false, restart count 0 Apr 11 18:17:44.072: INFO: pod-projected-configmaps-9c90939a-8a06-4037-82d3-b750c2b2fd97 started at (0+0 container statuses recorded) Apr 11 18:17:44.072: INFO: pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5 started at 2024-04-11 18:17:28 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container write-pod ready: true, restart count 0 Apr 11 18:17:44.072: INFO: create-loop-devs-tmv9n started at 2024-02-15 12:43:26 +0000 UTC 
(0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:17:44.072: INFO: csi-mockplugin-0 started at 2024-04-11 18:17:12 +0000 UTC (0+3 container statuses recorded) Apr 11 18:17:44.072: INFO: Container csi-provisioner ready: true, restart count 0 Apr 11 18:17:44.072: INFO: Container driver-registrar ready: true, restart count 0 Apr 11 18:17:44.072: INFO: Container mock ready: true, restart count 0 Apr 11 18:17:44.072: INFO: pod-secrets-34f643f6-3f33-4dcf-a689-00a0cbf296dd started at 2024-04-11 18:14:29 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container creates-volume-test ready: false, restart count 0 Apr 11 18:17:44.072: INFO: csi-mockplugin-attacher-0 started at 2024-04-11 18:17:39 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container csi-attacher ready: true, restart count 0 Apr 11 18:17:44.072: INFO: csi-mockplugin-attacher-0 started at 2024-04-11 18:17:12 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container csi-attacher ready: true, restart count 0 Apr 11 18:17:44.072: INFO: pod-subpath-test-configmap-xx8z started at 2024-04-11 18:16:34 +0000 UTC (1+2 container statuses recorded) Apr 11 18:17:44.072: INFO: Init container init-volume-configmap-xx8z ready: true, restart count 0 Apr 11 18:17:44.072: INFO: Container test-container-subpath-configmap-xx8z ready: true, restart count 4 Apr 11 18:17:44.072: INFO: Container test-container-volume-configmap-xx8z ready: true, restart count 0 Apr 11 18:17:44.072: INFO: kube-proxy-zhx9l started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:17:44.072: INFO: pod-200e6a37-301c-423f-b450-bf89c756e603 started at 2024-04-11 18:16:54 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container write-pod ready: false, restart count 0 Apr 11 18:17:44.072: INFO: 
pod-configmaps-fa229a98-0670-4c30-9363-3304095b4961 started at 2024-04-11 18:16:03 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container agnhost-container ready: false, restart count 0 Apr 11 18:17:44.072: INFO: csi-mockplugin-0 started at 2024-04-11 18:17:39 +0000 UTC (0+3 container statuses recorded) Apr 11 18:17:44.072: INFO: Container csi-provisioner ready: false, restart count 0 Apr 11 18:17:44.072: INFO: Container driver-registrar ready: false, restart count 0 Apr 11 18:17:44.072: INFO: Container mock ready: false, restart count 0 Apr 11 18:17:44.072: INFO: kindnet-l6j8p started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:17:44.072: INFO: pod-secrets-3453490d-dfb3-4e97-815c-1f8e58e32c7f started at 2024-04-11 18:16:57 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container creates-volume-test ready: false, restart count 0 Apr 11 18:17:44.072: INFO: pvc-volume-tester-sdpnm started at 2024-04-11 18:17:23 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container volume-tester ready: true, restart count 0 Apr 11 18:17:44.072: INFO: external-provisioner-h6krl started at 2024-04-11 18:17:33 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container nfs-provisioner ready: true, restart count 0 Apr 11 18:17:44.072: INFO: hostexec-v126-worker2-5hfkf started at 2024-04-11 18:17:08 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container agnhost-container ready: true, restart count 0 Apr 11 18:17:44.072: INFO: pod-secrets-22d8e9dd-deff-429f-9818-9bc71eaf87af started at 2024-04-11 18:11:52 +0000 UTC (0+1 container statuses recorded) Apr 11 18:17:44.072: INFO: Container creates-volume-test ready: false, restart count 0 Apr 11 18:17:45.032: INFO: Latency metrics for node v126-worker2 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local 
tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-2982" for this suite. 04/11/24 18:17:45.033 << End Captured GinkgoWriter Output Apr 11 18:17:43.457: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-cec6ee6e-ac33-4d11-9ffe-f66a8f02a9e5: Failed to find "1234", last result: "1000 " In [It] at: test/e2e/storage/persistent_volumes-local.go:807 ------------------------------ SSSSSSS ------------------------------ • [SLOW TEST] [13.236 seconds] [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow] test/e2e/storage/volume_provisioning.go:538 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [6.072 seconds] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] test/e2e/common/storage/projected_configmap.go:113 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [6.156 seconds] [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size test/e2e/common/storage/empty_dir.go:299 ------------------------------ S ------------------------------ • [SLOW TEST] [18.015 seconds] [sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node test/e2e/storage/local_volume_resize.go:85 ------------------------------ SSS ------------------------------ • [SLOW TEST] [59.984 seconds] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for persistent volume when generic ephemeral volume is attached [Slow] test/e2e/storage/csi_mock_volume.go:645 ------------------------------ SSSSSSSSSSSS ------------------------------ • [SLOW TEST] [20.969 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] 
Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow] test/e2e/storage/persistent_volumes-local.go:277 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.027 seconds] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] [BeforeEach] test/e2e/storage/persistent_volumes-gce.go:79 should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach test/e2e/storage/persistent_volumes-gce.go:129 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:18:16.018 Apr 11 18:18:16.018: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:18:16.02 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:18:16.028 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:18:16.032 [BeforeEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] test/e2e/storage/persistent_volumes-gce.go:79 Apr 11 18:18:16.036: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] test/e2e/framework/node/init/init.go:32 Apr 11 18:18:16.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] test/e2e/storage/persistent_volumes-gce.go:113 Apr 11 18:18:16.040: INFO: AfterEach: Cleaning up test resources Apr 11 18:18:16.040: INFO: pvc is nil Apr 11 18:18:16.040: INFO: pv is nil [DeferCleanup (Each)] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup 
(Each)] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes GCEPD [Feature:StorageProvider] tear down framework | framework.go:193 STEP: Destroying namespace "pv-990" for this suite. 04/11/24 18:18:16.04 << End Captured GinkgoWriter Output Only supported for providers [gce gke] (not local) In [BeforeEach] at: test/e2e/storage/persistent_volumes-gce.go:87 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [42.896 seconds] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused test/e2e/storage/csi_mock_volume.go:1413 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [14.105 seconds] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev test/e2e/storage/host_path_type.go:101 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [20.819 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 ------------------------------ SSSSSS ------------------------------ • [SLOW TEST] [32.464 seconds] [sig-storage] CSI mock volume storage capacity unlimited test/e2e/storage/csi_mock_volume.go:1194 ------------------------------ S ------------------------------ • [SLOW TEST] [6.074 seconds] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] test/e2e/common/storage/projected_downwardapi.go:109 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [0.032 
seconds] [sig-storage] Regional PD [BeforeEach] test/e2e/storage/regional_pd.go:70 RegionalPD test/e2e/storage/regional_pd.go:78 should provision storage [Slow] test/e2e/storage/regional_pd.go:79 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Regional PD set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:18:43.672 Apr 11 18:18:43.673: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename regional-pd 04/11/24 18:18:43.674 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:18:43.685 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:18:43.689 [BeforeEach] [sig-storage] Regional PD test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Regional PD test/e2e/storage/regional_pd.go:70 Apr 11 18:18:43.694: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD test/e2e/framework/node/init/init.go:32 Apr 11 18:18:43.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Regional PD test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Regional PD dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Regional PD tear down framework | framework.go:193 STEP: Destroying namespace "regional-pd-7685" for this suite. 
04/11/24 18:18:43.699 << End Captured GinkgoWriter Output Only supported for providers [gce gke] (not local) In [BeforeEach] at: test/e2e/storage/regional_pd.go:74 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [8.079 seconds] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev test/e2e/storage/host_path_type.go:230 ------------------------------ SSSSSSSSSSSS ------------------------------ • [SLOW TEST] [134.367 seconds] [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified test/e2e/storage/subpath.go:123 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [39.273 seconds] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 ------------------------------ SSSS ------------------------------ • [SLOW TEST] [18.832 seconds] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1 test/e2e/storage/persistent_volumes-local.go:235 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [10.069 seconds] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset test/e2e/storage/host_path_type.go:216 ------------------------------ SSSSSSSSSSSS ------------------------------ • [SLOW TEST] [6.069 seconds] [sig-storage] Projected downwardAPI should provide podname as non-root 
with fsgroup [LinuxOnly] [NodeFeature:FSGroup] test/e2e/common/storage/projected_downwardapi.go:94 ------------------------------ SSSSSSSS ------------------------------ S [SKIPPED] [0.031 seconds] [sig-storage] Volumes [BeforeEach] test/e2e/common/storage/volumes.go:66 NFSv4 test/e2e/common/storage/volumes.go:76 should be mountable for NFSv4 test/e2e/common/storage/volumes.go:77 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] Volumes set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:19:01.256 Apr 11 18:19:01.256: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename volume 04/11/24 18:19:01.258 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:19:01.269 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:19:01.273 [BeforeEach] [sig-storage] Volumes test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] Volumes test/e2e/common/storage/volumes.go:66 Apr 11 18:19:01.277: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes test/e2e/framework/node/init/init.go:32 Apr 11 18:19:01.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] Volumes test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] Volumes dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] Volumes tear down framework | framework.go:193 STEP: Destroying namespace "volume-278" for this suite. 
04/11/24 18:19:01.282 << End Captured GinkgoWriter Output Only supported for node OS distro [gci ubuntu custom] (not debian) In [BeforeEach] at: test/e2e/common/storage/volumes.go:67 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ S [SKIPPED] [8.199 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] [BeforeEach] test/e2e/storage/persistent_volumes-local.go:198 Two pods mounting a local volume one after the other test/e2e/storage/persistent_volumes-local.go:257 should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:258 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:18:59.343 Apr 11 18:18:59.343: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:18:59.345 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:18:59.356 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:18:59.36 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:198 Apr 11 18:18:59.375: INFO: Waiting up to 5m0s for pod "hostexec-v126-worker2-z29r9" in namespace "persistent-local-volumes-test-1240" to be "running" Apr 11 18:18:59.378: INFO: Pod "hostexec-v126-worker2-z29r9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.966579ms Apr 11 18:19:01.381: INFO: Pod "hostexec-v126-worker2-z29r9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006124348s Apr 11 18:19:03.382: INFO: Pod "hostexec-v126-worker2-z29r9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007146778s Apr 11 18:19:05.383: INFO: Pod "hostexec-v126-worker2-z29r9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007942082s Apr 11 18:19:07.382: INFO: Pod "hostexec-v126-worker2-z29r9": Phase="Running", Reason="", readiness=true. Elapsed: 8.00718875s Apr 11 18:19:07.382: INFO: Pod "hostexec-v126-worker2-z29r9" satisfied condition "running" Apr 11 18:19:07.382: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1240 PodName:hostexec-v126-worker2-z29r9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 11 18:19:07.382: INFO: >>> kubeConfig: /home/xtesting/.kube/config Apr 11 18:19:07.383: INFO: ExecWithOptions: Clientset creation Apr 11 18:19:07.383: INFO: ExecWithOptions: execute(POST https://172.30.13.90:35339/api/v1/namespaces/persistent-local-volumes-test-1240/pods/hostexec-v126-worker2-z29r9/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=ls+-1+%2Fmnt%2Fdisks%2Fby-uuid%2Fgoogle-local-ssds-scsi-fs%2F+%7C+wc+-l&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Apr 11 18:19:07.532: INFO: exec v126-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Apr 11 18:19:07.532: INFO: exec v126-worker2: stdout: "0\n" Apr 11 18:19:07.532: INFO: exec v126-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Apr 11 18:19:07.532: INFO: exec v126-worker2: exit code: 0 Apr 11 18:19:07.532: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] test/e2e/storage/persistent_volumes-local.go:207 STEP: Cleaning up PVC 
and PV 04/11/24 18:19:07.533 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:19:07.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-1240" for this suite. 04/11/24 18:19:07.538 << End Captured GinkgoWriter Output Requires at least 1 scsi fs localSSD In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:1255 There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ S ------------------------------ S [SKIPPED] [0.034 seconds] [sig-storage] [Serial] Volume metrics [BeforeEach] test/e2e/storage/volume_metrics.go:62 Ephemeral test/e2e/storage/volume_metrics.go:495 should create volume metrics with the correct BlockMode PVC ref test/e2e/storage/volume_metrics.go:477 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] [Serial] Volume metrics set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:19:07.545 Apr 11 18:19:07.545: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pv 04/11/24 18:19:07.547 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:19:07.557 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:19:07.561 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:62 Apr 11 18:19:07.565: INFO: Only 
supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/framework/node/init/init.go:32 Apr 11 18:19:07.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-storage] [Serial] Volume metrics test/e2e/storage/volume_metrics.go:101 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics tear down framework | framework.go:193 STEP: Destroying namespace "pv-9390" for this suite. 04/11/24 18:19:07.57 << End Captured GinkgoWriter Output Only supported for providers [gce gke aws] (not local) In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70 There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSSS ------------------------------ • [SLOW TEST] [16.193 seconds] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset test/e2e/storage/host_path_type.go:286 ------------------------------ SSSSSSS ------------------------------ • [SLOW TEST] [126.053 seconds] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected test/e2e/storage/ephemeral_volume.go:58 ------------------------------ SS ------------------------------ • [SLOW TEST] [31.322 seconds] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow] test/e2e/storage/persistent_volumes-local.go:277 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ • 
[SLOW TEST] [300.058 seconds] [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] test/e2e/common/storage/secrets_volume.go:449 ------------------------------ SSS ------------------------------ • [SLOW TEST] [29.112 seconds] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow] test/e2e/storage/persistent_volumes-local.go:277 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [19.385 seconds] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1 test/e2e/storage/persistent_volumes-local.go:241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [126.050 seconds] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap test/e2e/storage/ephemeral_volume.go:58 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [6.065 seconds] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] test/e2e/common/storage/configmap_volume.go:78 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [73.314 seconds] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default test/e2e/storage/csi_mock_volume.go:1696 ------------------------------ SSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [20.092 seconds] [sig-storage] HostPathType Directory [Slow] Should be able to 
mount directory 'adir' successfully when HostPathType is HostPathDirectory test/e2e/storage/host_path_type.go:78 ------------------------------ SSSSSSSS ------------------------------ • [SLOW TEST] [18.055 seconds] [sig-storage] Volumes ConfigMap should be mountable test/e2e/storage/volumes.go:50 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [92.696 seconds] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error test/e2e/storage/csi_mock_volume.go:942 ------------------------------ SSSSSSSSSSSS ------------------------------ • [SLOW TEST] [72.921 seconds] [sig-storage] CSI mock volume storage capacity exhausted, immediate binding test/e2e/storage/csi_mock_volume.go:1194 ------------------------------ SS ------------------------------ S [SKIPPED] [0.034 seconds] [sig-storage] PersistentVolumes-local test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity [BeforeEach] test/e2e/storage/persistent_volumes-local.go:357 should fail scheduling due to different NodeSelector test/e2e/storage/persistent_volumes-local.go:382 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] PersistentVolumes-local set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:20:42.148 Apr 11 18:20:42.148: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test 04/11/24 18:20:42.15 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:20:42.161 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:20:42.165 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:161 [BeforeEach] Pod with node different from PV's NodeAffinity 
test/e2e/storage/persistent_volumes-local.go:357 Apr 11 18:20:42.172: INFO: Runs only when number of nodes >= 2 [AfterEach] Pod with node different from PV's NodeAffinity test/e2e/storage/persistent_volumes-local.go:373 STEP: Cleaning up PVC and PV 04/11/24 18:20:42.173 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/node/init/init.go:32 Apr 11 18:20:42.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] PersistentVolumes-local tear down framework | framework.go:193 STEP: Destroying namespace "persistent-local-volumes-test-4022" for this suite. 04/11/24 18:20:42.177 << End Captured GinkgoWriter Output Runs only when number of nodes >= 2 In [BeforeEach] at: test/e2e/storage/persistent_volumes-local.go:359 There were additional failures detected after the initial failure. 
Here's a summary - for full details run Ginkgo in verbose mode: [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [97.742 seconds] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error test/e2e/storage/csi_mock_volume.go:1075 ------------------------------ SSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [47.964 seconds] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on test/e2e/storage/csi_mock_volume.go:799 ------------------------------ SSSSSSSSSSS ------------------------------ • [SLOW TEST] [52.462 seconds] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled test/e2e/storage/csi_mock_volume.go:1413 ------------------------------ S ------------------------------ • [SLOW TEST] [300.057 seconds] [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow] test/e2e/common/storage/configmap_volume.go:557 ------------------------------ • [SLOW TEST] [51.430 seconds] [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment test/e2e/storage/csi_mock_volume.go:392 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [32.115 seconds] [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable test/e2e/storage/pvc_protection.go:148 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [SLOW TEST] [29.057 seconds] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same 
time should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:252
------------------------------
SSSSSSSS
------------------------------
• [SLOW TEST] [8.070 seconds]
[sig-storage] HostPath should support r/w [NodeConformance]
test/e2e/common/storage/host_path.go:68
------------------------------
SS
------------------------------
• [SLOW TEST] [29.177 seconds]
[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:235
------------------------------
SSSSSSSSSSS
------------------------------
• [SLOW TEST] [18.075 seconds]
[sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev
test/e2e/storage/host_path_type.go:235
------------------------------
S [SKIPPED] [0.030 seconds]
[sig-storage] Pod Disks [Feature:StorageProvider] [BeforeEach]
test/e2e/storage/pd.go:76
  schedule pods each with a PD, delete pod and verify detach [Slow]
  test/e2e/storage/pd.go:95
    for RW PD with pod delete grace period of "default (30s)"
    test/e2e/storage/pd.go:137

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
      set up framework | framework.go:178
    STEP: Creating a kubernetes client 04/11/24 18:21:32.485
    Apr 11 18:21:32.485: INFO: >>> kubeConfig: /home/xtesting/.kube/config
    STEP: Building a namespace api object, basename pod-disks 04/11/24 18:21:32.487
    STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:21:32.497
    STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:21:32.501
    [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
      test/e2e/framework/metrics/init/init.go:31
    [BeforeEach] [sig-storage] Pod Disks [Feature:StorageProvider]
      test/e2e/storage/pd.go:76
    Apr 11 18:21:32.505: INFO: Requires at least 2 nodes (not 1)
    [AfterEach] [sig-storage] Pod Disks [Feature:StorageProvider]
      test/e2e/framework/node/init/init.go:32
    Apr 11 18:21:32.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
      test/e2e/framework/metrics/init/init.go:33
    [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
      dump namespaces | framework.go:196
    [DeferCleanup (Each)] [sig-storage] Pod Disks [Feature:StorageProvider]
      tear down framework | framework.go:193
    STEP: Destroying namespace "pod-disks-6644" for this suite. 04/11/24 18:21:32.51
  << End Captured GinkgoWriter Output

  Requires at least 2 nodes (not 1)
  In [BeforeEach] at: test/e2e/storage/pd.go:77
------------------------------
S
------------------------------
• [SLOW TEST] [10.204 seconds]
[sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev
test/e2e/storage/host_path_type.go:375
------------------------------
SSSSSS
------------------------------
• [SLOW TEST] [300.059 seconds]
[sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
test/e2e/common/storage/projected_secret.go:424
------------------------------
• [SLOW TEST] [44.891 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
test/e2e/storage/persistent_volumes-local.go:258
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [57.652 seconds]
[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success
test/e2e/storage/csi_mock_volume.go:1075
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [82.924 seconds]
[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology
test/e2e/storage/csi_mock_volume.go:1194
------------------------------
SSS
------------------------------
• [SLOW TEST] [35.446 seconds]
[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
test/e2e/storage/persistent_volumes-local.go:277
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [20.092 seconds]
[sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket
test/e2e/storage/host_path_type.go:159
------------------------------
• [SLOW TEST] [199.714 seconds]
[sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error
test/e2e/storage/csi_mock_volume.go:942
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [109.941 seconds]
[sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
test/e2e/storage/csi_mock_volume.go:1771
------------------------------
SSSSSS
------------------------------
• [SLOW TEST] [36.702 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:235
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [43.422 seconds]
[sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
test/e2e/storage/csi_mock_volume.go:549
------------------------------
SSSSSSSSS
------------------------------
S [SKIPPED] [0.033 seconds]
[sig-storage] Multi-AZ Cluster Volumes [BeforeEach]
test/e2e/storage/ubernetes_lite_volumes.go:45
  should schedule pods in the same zones as statically provisioned PVs
  test/e2e/storage/ubernetes_lite_volumes.go:56

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
      set up framework | framework.go:178
    STEP: Creating a kubernetes client 04/11/24 18:22:44.187
    Apr 11 18:22:44.187: INFO: >>> kubeConfig: /home/xtesting/.kube/config
    STEP: Building a namespace api object, basename multi-az 04/11/24 18:22:44.189
    STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:22:44.201
    STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:22:44.205
    [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
      test/e2e/framework/metrics/init/init.go:31
    [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
      test/e2e/storage/ubernetes_lite_volumes.go:45
    Apr 11 18:22:44.209: INFO: Only supported for providers [gce gke] (not local)
    [AfterEach] [sig-storage] Multi-AZ Cluster Volumes
      test/e2e/framework/node/init/init.go:32
    Apr 11 18:22:44.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    [DeferCleanup (Each)] [sig-storage] Multi-AZ Cluster Volumes
      test/e2e/framework/metrics/init/init.go:33
    [DeferCleanup (Each)] [sig-storage] Multi-AZ Cluster Volumes
      dump namespaces | framework.go:196
    [DeferCleanup (Each)] [sig-storage] Multi-AZ Cluster Volumes
      tear down framework | framework.go:193
    STEP: Destroying namespace "multi-az-1838" for this suite. 04/11/24 18:22:44.215
  << End Captured GinkgoWriter Output

  Only supported for providers [gce gke] (not local)
  In [BeforeEach] at: test/e2e/storage/ubernetes_lite_volumes.go:46
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [8.179 seconds]
[sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket
test/e2e/storage/host_path_type.go:300
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPED] [0.029 seconds]
[sig-storage] [Serial] Volume metrics [BeforeEach]
test/e2e/storage/volume_metrics.go:62
  PVC
  test/e2e/storage/volume_metrics.go:491
    should create volume metrics with the correct FilesystemMode PVC ref
    test/e2e/storage/volume_metrics.go:474

  Begin Captured GinkgoWriter Output >>
    [BeforeEach] [sig-storage] [Serial] Volume metrics
      set up framework | framework.go:178
    STEP: Creating a kubernetes client 04/11/24 18:22:44.53
    Apr 11 18:22:44.530: INFO: >>> kubeConfig: /home/xtesting/.kube/config
    STEP: Building a namespace api object, basename pv 04/11/24 18:22:44.531
    STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:22:44.542
    STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:22:44.545
    [BeforeEach] [sig-storage] [Serial] Volume metrics
      test/e2e/framework/metrics/init/init.go:31
    [BeforeEach] [sig-storage] [Serial] Volume metrics
      test/e2e/storage/volume_metrics.go:62
    Apr 11 18:22:44.549: INFO: Only supported for providers [gce gke aws] (not local)
    [AfterEach] [sig-storage] [Serial] Volume metrics
      test/e2e/framework/node/init/init.go:32
    Apr 11 18:22:44.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    [AfterEach] [sig-storage] [Serial] Volume metrics
      test/e2e/storage/volume_metrics.go:101
    [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
      test/e2e/framework/metrics/init/init.go:33
    [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
      dump namespaces | framework.go:196
    [DeferCleanup (Each)] [sig-storage] [Serial] Volume metrics
      tear down framework | framework.go:193
    STEP: Destroying namespace "pv-3298" for this suite. 04/11/24 18:22:44.554
  << End Captured GinkgoWriter Output

  Only supported for providers [gce gke aws] (not local)
  In [BeforeEach] at: test/e2e/storage/volume_metrics.go:70

  There were additional failures detected after the initial failure. Here's a summary - for full details run Ginkgo in verbose mode:
  [PANICKED] in [AfterEach] at /usr/local/go/src/runtime/panic.go:260
------------------------------
SSSSSSSSSS
------------------------------
• [SLOW TEST] [14.073 seconds]
[sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory
test/e2e/storage/host_path_type.go:220
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [23.640 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
test/e2e/storage/persistent_volumes-local.go:235
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [6.084 seconds]
[sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
test/e2e/common/storage/empty_dir.go:72
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [26.931 seconds]
[sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity
test/e2e/storage/csi_mock_volume.go:1413
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [58.012 seconds]
[sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
test/e2e/storage/csi_mock_volume.go:700
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [SLOW TEST] [14.098 seconds]
[sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile
test/e2e/storage/host_path_type.go:146
------------------------------
SSSSSSSS
------------------------------
• [SLOW TEST] [91.696 seconds]
[sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology
test/e2e/storage/csi_mock_volume.go:1194
------------------------------
SSSS
------------------------------
• [SLOW TEST] [14.789 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
test/e2e/storage/persistent_volumes-local.go:241
------------------------------
• [SLOW TEST] [24.923 seconds]
[sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
test/e2e/storage/persistent_volumes-local.go:277
------------------------------
• [SLOW TEST] [63.988 seconds]
[sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for generic ephemeral volume when persistent volume is attached [Slow]
test/e2e/storage/csi_mock_volume.go:620
------------------------------
• [SLOW TEST] [35.385 seconds]
[sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
test/e2e/storage/csi_mock_volume.go:392
------------------------------
• [SLOW TEST] [300.062 seconds]
[sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
test/e2e/common/storage/configmap_volume.go:566
------------------------------
• [SLOW TEST] [216.083 seconds]
[sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
test/e2e/storage/csi_mock_volume.go:589
------------------------------
• [SLOW TEST] [300.051 seconds]
[sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
test/e2e/common/storage/projected_configmap.go:472
------------------------------
• [SLOW TEST] [267.472 seconds]
[sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]
test/e2e/storage/csi_mock_volume.go:430
------------------------------
• [SLOW TEST] [602.100 seconds]
[sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to non-existent path
test/e2e/storage/persistent_volumes-local.go:310
------------------------------

Summarizing 1 Failure:
  [FAIL] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume [It] should set fsGroup for one pod [Slow]
  test/e2e/storage/persistent_volumes-local.go:807

Ran 172 of 7069 Specs in 1436.945 seconds
FAIL! -- 171 Passed | 1 Failed | 0 Pending | 6897 Skipped

Ginkgo ran 1 suite in 23m58.024296856s

Test Suite Failed