Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1664971383 - Will randomize all specs
Will run 6444 specs
Running in parallel across 10 nodes
Oct 5 12:03:06.885: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:06.888: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 5 12:03:06.916: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 5 12:03:06.976: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 5 12:03:06.976: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Oct 5 12:03:06.976: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 5 12:03:06.991: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Oct 5 12:03:06.991: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Oct 5 12:03:06.991: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 5 12:03:06.991: INFO: e2e test version: v1.22.15
Oct 5 12:03:06.993: INFO: kube-apiserver version: v1.22.15
Oct 5 12:03:06.996: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.004: INFO: Cluster IP family: ipv4
Oct 5 12:03:07.000: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.024: INFO: Cluster IP family: ipv4
Oct 5 12:03:07.002: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.027: INFO: Cluster IP family: ipv4
Oct 5 12:03:07.010: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.030: INFO: Cluster IP family: ipv4
Oct 5 12:03:07.005: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.030: INFO: Cluster IP family: ipv4
Oct 5 12:03:07.002: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.030: INFO: Cluster IP family: ipv4
Oct 5 12:03:07.028: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.055: INFO: Cluster IP family: ipv4
Oct 5 12:03:07.034: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.058: INFO: Cluster IP family: ipv4
Oct 5 12:03:07.040: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.064: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 5 12:03:07.085: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:07.102: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:07.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv-protection
W1005 12:03:07.136869 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 5 12:03:07.136: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51
Oct 5 12:03:07.140: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PV
STEP: Waiting for PV to enter phase Available
Oct 5 12:03:07.146: INFO: Waiting up to 30s for PersistentVolume hostpath-mw72c to have phase Available
Oct 5 12:03:07.149: INFO: PersistentVolume hostpath-mw72c found but phase is Pending instead of Available.
Oct 5 12:03:08.153: INFO: PersistentVolume hostpath-mw72c found and phase=Available (1.007153049s)
STEP: Checking that PV Protection finalizer is set
[It] Verify that PV bound to a PVC is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107
STEP: Creating a PVC
STEP: Waiting for PVC to become Bound
Oct 5 12:03:08.163: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-f56w2] to have phase Bound
Oct 5 12:03:08.166: INFO: PersistentVolumeClaim pvc-f56w2 found but phase is Pending instead of Bound.
Oct 5 12:03:10.170: INFO: PersistentVolumeClaim pvc-f56w2 found and phase=Bound (2.007754358s)
STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC
STEP: Checking that the PV status is Terminating
STEP: Deleting the PVC that is bound to the PV
STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC
Oct 5 12:03:10.184: INFO: Waiting up to 3m0s for PersistentVolume hostpath-mw72c to get deleted
Oct 5 12:03:10.188: INFO: PersistentVolume hostpath-mw72c found and phase=Bound (4.2309ms)
Oct 5 12:03:12.334: INFO: PersistentVolume hostpath-mw72c was removed
[AfterEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:12.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-protection-5384" for this suite.
[AfterEach] [sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92
Oct 5 12:03:12.344: INFO: AfterEach: Cleaning up test resources.
Oct 5 12:03:12.344: INFO: Deleting PersistentVolumeClaim "pvc-f56w2"
Oct 5 12:03:12.395: INFO: Deleting PersistentVolume "hostpath-mw72c"
• [SLOW TEST:5.285 seconds]
[sig-storage] PV Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Verify that PV bound to a PVC is not removed immediately
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":1,"skipped":5,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
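The PV Protection spec above exercises the kubernetes.io/pv-protection finalizer: a PersistentVolume that is still bound to a PersistentVolumeClaim only moves to Terminating when deleted, and is removed once the claim is gone. As a rough illustration of the API calls involved (not part of the suite output; the kubeconfig path, hostPath directory and names are assumptions), a client-go sketch that creates a hostPath PV and waits for the protection finalizer to appear:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A minimal hostPath PV, comparable in shape to "hostpath-mw72c" in the log above.
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "hostpath-"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:    corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/pv-protection-demo"},
			},
		},
	}
	created, err := client.CoreV1().PersistentVolumes().Create(context.TODO(), pv, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// The PV protection controller adds the finalizer shortly after creation;
	// this is what the "Checking that PV Protection finalizer is set" step waits for.
	err = wait.PollImmediate(250*time.Millisecond, 30*time.Second, func() (bool, error) {
		cur, err := client.CoreV1().PersistentVolumes().Get(context.TODO(), created.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, f := range cur.Finalizers {
			if f == "kubernetes.io/pv-protection" {
				return true, nil
			}
		}
		return false, nil
	})
	fmt.Println("finalizer observed, err:", err)
}
```

Deleting such a PV while a PVC still binds it leaves the object in phase Terminating, which is exactly what the "Checking that the PV status is Terminating" step asserts.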
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:07.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W1005 12:03:07.093807 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 5 12:03:07.093: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Oct 5 12:03:13.118: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3580 PodName:hostexec-v122-worker-8f2wq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:03:13.118: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:13.266: INFO: exec v122-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Oct 5 12:03:13.266: INFO: exec v122-worker: stdout: "0\n"
Oct 5 12:03:13.266: INFO: exec v122-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Oct 5 12:03:13.266: INFO: exec v122-worker: exit code: 0
Oct 5 12:03:13.266: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:13.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3580" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [6.206 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
Requires at least 1 scsi fs localSSD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1250
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:07.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W1005 12:03:07.095362 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 5 12:03:07.095: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
Oct 5 12:03:13.118: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5189 PodName:hostexec-v122-worker-8wtxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:03:13.118: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:13.266: INFO: exec v122-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Oct 5 12:03:13.266: INFO: exec v122-worker: stdout: "0\n"
Oct 5 12:03:13.266: INFO: exec v122-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n"
Oct 5 12:03:13.266: INFO: exec v122-worker: exit code: 0
Oct 5 12:03:13.266: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:13.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-5189" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [6.213 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set different fsGroup for second pod if first pod is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Requires at least 1 scsi fs localSSD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1250
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:07.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W1005 12:03:07.085794 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 5 12:03:07.085: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Oct 5 12:03:07.094: INFO: Waiting up to 5m0s for pod "metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a" in namespace "downward-api-1737" to be "Succeeded or Failed"
Oct 5 12:03:07.097: INFO: Pod "metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958612ms
Oct 5 12:03:09.102: INFO: Pod "metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007473506s
Oct 5 12:03:11.106: INFO: Pod "metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011879929s
Oct 5 12:03:13.111: INFO: Pod "metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a": Phase="Running", Reason="", readiness=false. Elapsed: 6.016666207s
Oct 5 12:03:15.116: INFO: Pod "metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021106132s
STEP: Saw pod success
Oct 5 12:03:15.116: INFO: Pod "metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a" satisfied condition "Succeeded or Failed"
Oct 5 12:03:15.119: INFO: Trying to get logs from node v122-worker pod metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a container client-container:
STEP: delete the pod
Oct 5 12:03:15.158: INFO: Waiting for pod metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a to disappear
Oct 5 12:03:15.161: INFO: Pod metadata-volume-537a8152-9c7c-4580-a138-34aa6414b17a no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:15.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1737" for this suite.
• [SLOW TEST:8.125 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":7,"failed":0}
SSSSSSS
------------------------------
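The Downward API spec above mounts pod metadata as files while the container runs as a non-root user with an fsGroup. A minimal sketch of a pod with the same shape (the names, UID/GID values and image are illustrative assumptions, not values taken from the log):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod mounts its own name via the downward API and relies on
// fsGroup for volume group ownership, mirroring the shape of the pod in the
// "should provide podname as non-root with fsgroup" spec.
func downwardAPIPod() *corev1.Pod {
	uid, gid := int64(1000), int64(2000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "metadata-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid, // non-root, as in the [LinuxOnly] spec
				FSGroup:   &gid, // volume files become group-owned by this GID
			},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.35", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
}
```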
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:07.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
W1005 12:03:07.089125 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 5 12:03:07.089: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support memory backed volumes of specified size
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298
STEP: Creating Pod
Oct 5 12:03:07.103: INFO: The status of Pod pod-size-memory-volume-10feefd1-e25a-4026-96c4-1023168bd72a is Pending, waiting for it to be Running (with Ready = true)
Oct 5 12:03:09.108: INFO: The status of Pod pod-size-memory-volume-10feefd1-e25a-4026-96c4-1023168bd72a is Pending, waiting for it to be Running (with Ready = true)
Oct 5 12:03:11.108: INFO: The status of Pod pod-size-memory-volume-10feefd1-e25a-4026-96c4-1023168bd72a is Pending, waiting for it to be Running (with Ready = true)
Oct 5 12:03:13.108: INFO: The status of Pod pod-size-memory-volume-10feefd1-e25a-4026-96c4-1023168bd72a is Pending, waiting for it to be Running (with Ready = true)
Oct 5 12:03:15.108: INFO: The status of Pod pod-size-memory-volume-10feefd1-e25a-4026-96c4-1023168bd72a is Running (Ready = true)
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading empty dir size
Oct 5 12:03:15.117: INFO: ExecWithOptions {Command:[/bin/sh -c df | grep /usr/share/volumeshare | awk '{print $2}'] Namespace:emptydir-1575 PodName:pod-size-memory-volume-10feefd1-e25a-4026-96c4-1023168bd72a ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:15.117: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:15.224: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:15.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1575" for this suite.
• [SLOW TEST:8.179 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
pod should support memory backed volumes of specified size
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSSSS
------------------------------
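The memory-backed emptyDir spec above mounts a tmpfs emptyDir with an explicit size and then reads the size back with df inside the container. A minimal pod sketch with the same volume shape (the 256Mi limit, names and image are illustrative; honoring sizeLimit for memory-backed emptyDir in this release also depends on the SizeMemoryBackedVolumes feature gate):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryBackedEmptyDirPod mounts a tmpfs-backed emptyDir with a size limit and
// checks the mounted size the same way the test does via ExecWithOptions.
func memoryBackedEmptyDirPod() *corev1.Pod {
	size := resource.MustParse("256Mi") // illustrative sizeLimit for the tmpfs mount
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-size-memory-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "volumeshare",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium:    corev1.StorageMediumMemory, // tmpfs
						SizeLimit: &size,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "busybox-main-container",
				Image: "busybox:1.35",
				// Same kind of check the log shows: report the tmpfs size in KiB.
				Command: []string{"/bin/sh", "-c", "df | grep /usr/share/volumeshare | awk '{print $2}'"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "volumeshare",
					MountPath: "/usr/share/volumeshare",
				}},
			}},
		},
	}
}
```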
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:15.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-socket
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191
STEP: Create a pod for further testing
Oct 5 12:03:15.225: INFO: The status of Pod test-hostpath-type-qstsc is Pending, waiting for it to be Running (with Ready = true)
Oct 5 12:03:17.230: INFO: The status of Pod test-hostpath-type-qstsc is Running (Ready = true)
STEP: running on node v122-worker
[It] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Socket [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:19.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-socket-7706" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory","total":-1,"completed":2,"skipped":14,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:15.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Oct 5 12:03:15.299: INFO: Waiting up to 5m0s for pod "pod-04287dad-9969-4002-8a51-ef994aef92c5" in namespace "emptydir-6699" to be "Succeeded or Failed"
Oct 5 12:03:15.302: INFO: Pod "pod-04287dad-9969-4002-8a51-ef994aef92c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.73068ms
Oct 5 12:03:17.306: INFO: Pod "pod-04287dad-9969-4002-8a51-ef994aef92c5": Phase="Running", Reason="", readiness=false. Elapsed: 2.006566345s
Oct 5 12:03:19.309: INFO: Pod "pod-04287dad-9969-4002-8a51-ef994aef92c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01020612s
STEP: Saw pod success
Oct 5 12:03:19.309: INFO: Pod "pod-04287dad-9969-4002-8a51-ef994aef92c5" satisfied condition "Succeeded or Failed"
Oct 5 12:03:19.312: INFO: Trying to get logs from node v122-worker pod pod-04287dad-9969-4002-8a51-ef994aef92c5 container test-container:
STEP: delete the pod
Oct 5 12:03:19.325: INFO: Waiting for pod pod-04287dad-9969-4002-8a51-ef994aef92c5 to disappear
Oct 5 12:03:19.328: INFO: Pod pod-04287dad-9969-4002-8a51-ef994aef92c5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:19.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6699" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":2,"skipped":24,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:19.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57
Oct 5 12:03:19.533: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:19.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8914" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91
S [SKIPPING] in Spec Setup (BeforeEach) [0.045 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
PVController [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:457
should create unbound pv count metrics for pvc controller after creating pv only
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:559
Only supported for providers [gce gke aws] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:07.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W1005 12:03:07.163147 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 5 12:03:07.163: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 5 12:03:15.188: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-a6f1bca3-4dd5-406e-b5b0-eae08cc56d15-backend && mount --bind /tmp/local-volume-test-a6f1bca3-4dd5-406e-b5b0-eae08cc56d15-backend /tmp/local-volume-test-a6f1bca3-4dd5-406e-b5b0-eae08cc56d15-backend && ln -s /tmp/local-volume-test-a6f1bca3-4dd5-406e-b5b0-eae08cc56d15-backend /tmp/local-volume-test-a6f1bca3-4dd5-406e-b5b0-eae08cc56d15] Namespace:persistent-local-volumes-test-8318 PodName:hostexec-v122-worker-4l8ms ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:03:15.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Oct 5 12:03:15.305: INFO: Creating a PV followed by a PVC
Oct 5 12:03:15.313: INFO: Waiting for PV local-pvzz5xp to bind to PVC pvc-8fr64
Oct 5 12:03:15.314: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8fr64] to have phase Bound
Oct 5 12:03:15.316: INFO: PersistentVolumeClaim pvc-8fr64 found but phase is Pending instead of Bound.
Oct 5 12:03:17.320: INFO: PersistentVolumeClaim pvc-8fr64 found but phase is Pending instead of Bound.
Oct 5 12:03:19.325: INFO: PersistentVolumeClaim pvc-8fr64 found but phase is Pending instead of Bound.
Oct 5 12:03:21.330: INFO: PersistentVolumeClaim pvc-8fr64 found and phase=Bound (6.016532986s)
Oct 5 12:03:21.330: INFO: Waiting up to 3m0s for PersistentVolume local-pvzz5xp to have phase Bound
Oct 5 12:03:21.333: INFO: PersistentVolume local-pvzz5xp found and phase=Bound (2.939514ms)
[BeforeEach] Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set different fsGroup for second pod if first pod is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Oct 5 12:03:21.339: INFO: Disabled temporarily, reopen after #73168 is fixed
[AfterEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 5 12:03:21.341: INFO: Deleting PersistentVolumeClaim "pvc-8fr64"
Oct 5 12:03:21.346: INFO: Deleting PersistentVolume "local-pvzz5xp"
STEP: Removing the test directory
Oct 5 12:03:21.351: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-a6f1bca3-4dd5-406e-b5b0-eae08cc56d15 && umount /tmp/local-volume-test-a6f1bca3-4dd5-406e-b5b0-eae08cc56d15-backend && rm -r /tmp/local-volume-test-a6f1bca3-4dd5-406e-b5b0-eae08cc56d15-backend] Namespace:persistent-local-volumes-test-8318 PodName:hostexec-v122-worker-4l8ms ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:03:21.351: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:21.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8318" for this suite.
S [SKIPPING] [14.390 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set different fsGroup for second pod if first pod is deleted [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286
Disabled temporarily, reopen after #73168 is fixed
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:21.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-block-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325
STEP: Create a pod for further testing
Oct 5 12:03:21.573: INFO: The status of Pod test-hostpath-type-f6htl is Pending, waiting for it to be Running (with Ready = true)
Oct 5 12:03:23.578: INFO: The status of Pod test-hostpath-type-f6htl is Running (Ready = true)
STEP: running on node v122-worker
STEP: Create a block device for further testing
Oct 5 12:03:23.581: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-1992 PodName:test-hostpath-type-f6htl ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:23.581: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350
[AfterEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:25.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-block-dev-1992" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset","total":-1,"completed":1,"skipped":91,"failed":0}
SSSSSSSSSSSSSS
------------------------------
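The two HostPathType specs above (socket and block device) rely on the optional type field of a hostPath volume: the kubelet checks the existing host path against the declared type before mounting, rejects the pod with an error event on a mismatch, and skips the check entirely for HostPathUnset. A hedged sketch of such a pod (path, names and image are illustrative):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathTypedPod asks the kubelet to verify that the host path already is a
// directory before mounting. If /mnt/test/asocket is actually a socket, as in
// the "Should fail on mounting socket 'asocket'" spec, the pod does not start
// and an error event is recorded instead.
func hostPathTypedPod() *corev1.Pod {
	wantType := corev1.HostPathDirectory // other values: HostPathSocket, HostPathBlockDev, HostPathUnset, ...
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-hostpath-type-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "hostpath",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/mnt/test/asocket",
						Type: &wantType,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "host-path-testing",
				Image:        "busybox:1.35",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "hostpath", MountPath: "/mnt/hostpath"}},
			}},
		},
	}
}
```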
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:07.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
W1005 12:03:07.079435 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 5 12:03:07.079: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 5 12:03:11.102: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4-backend && mount --bind /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4-backend /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4-backend && ln -s /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4-backend /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4] Namespace:persistent-local-volumes-test-7178 PodName:hostexec-v122-worker2-xwctq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:03:11.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Oct 5 12:03:11.248: INFO: Creating a PV followed by a PVC
Oct 5 12:03:11.255: INFO: Waiting for PV local-pvn625d to bind to PVC pvc-m2f5k
Oct 5 12:03:11.255: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-m2f5k] to have phase Bound
Oct 5 12:03:11.258: INFO: PersistentVolumeClaim pvc-m2f5k found but phase is Pending instead of Bound.
Oct 5 12:03:13.263: INFO: PersistentVolumeClaim pvc-m2f5k found but phase is Pending instead of Bound.
Oct 5 12:03:15.267: INFO: PersistentVolumeClaim pvc-m2f5k found but phase is Pending instead of Bound.
Oct 5 12:03:17.276: INFO: PersistentVolumeClaim pvc-m2f5k found but phase is Pending instead of Bound.
Oct 5 12:03:19.279: INFO: PersistentVolumeClaim pvc-m2f5k found but phase is Pending instead of Bound.
Oct 5 12:03:21.284: INFO: PersistentVolumeClaim pvc-m2f5k found and phase=Bound (10.028283328s)
Oct 5 12:03:21.284: INFO: Waiting up to 3m0s for PersistentVolume local-pvn625d to have phase Bound
Oct 5 12:03:21.287: INFO: PersistentVolume local-pvn625d found and phase=Bound (2.965352ms)
[It] should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
Oct 5 12:03:29.319: INFO: pod "pod-13b50563-0611-41c9-bee6-486873083b87" created on Node "v122-worker2"
STEP: Writing in pod1
Oct 5 12:03:29.319: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7178 PodName:pod-13b50563-0611-41c9-bee6-486873083b87 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:29.319: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:29.436: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
Oct 5 12:03:29.437: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7178 PodName:pod-13b50563-0611-41c9-bee6-486873083b87 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:29.437: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:29.565: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Creating pod2 to read from the PV
STEP: Creating a pod
Oct 5 12:03:33.586: INFO: pod "pod-c508d9a8-c404-459b-b36a-db6fe59351e0" created on Node "v122-worker2"
Oct 5 12:03:33.586: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7178 PodName:pod-c508d9a8-c404-459b-b36a-db6fe59351e0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:33.586: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:33.709: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Writing in pod2
Oct 5 12:03:33.709: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7178 PodName:pod-c508d9a8-c404-459b-b36a-db6fe59351e0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:33.709: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:33.829: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4 > /mnt/volume1/test-file", out: "", stderr: "", err:
STEP: Reading in pod1
Oct 5 12:03:33.829: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7178 PodName:pod-13b50563-0611-41c9-bee6-486873083b87 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:33.829: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:33.967: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4", stderr: "", err:
STEP: Deleting pod1
STEP: Deleting pod pod-13b50563-0611-41c9-bee6-486873083b87 in namespace persistent-local-volumes-test-7178
STEP: Deleting pod2
STEP: Deleting pod pod-c508d9a8-c404-459b-b36a-db6fe59351e0 in namespace persistent-local-volumes-test-7178
[AfterEach] [Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 5 12:03:33.978: INFO: Deleting PersistentVolumeClaim "pvc-m2f5k"
Oct 5 12:03:33.983: INFO: Deleting PersistentVolume "local-pvn625d"
STEP: Removing the test directory
Oct 5 12:03:33.992: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4 && umount /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4-backend && rm -r /tmp/local-volume-test-7660600e-342c-4e3c-8863-4c4df850a9b4-backend] Namespace:persistent-local-volumes-test-7178 PodName:hostexec-v122-worker2-xwctq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:03:33.992: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:34.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-7178" for this suite.
• [SLOW TEST:27.118 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
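The local-volume specs above back each PersistentVolume with a plain host directory (here a bind-mounted directory reached through a symlink) on one specific node. A sketch of the kind of local PV this implies; the path, node name and storage class are assumptions for illustration, and a local PV must carry required node affinity:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV points a `local:` volume source at a host directory and pins it to
// a single node, which is what lets two pods on that node share the data the
// specs write and read back.
func localPV() *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:         corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: "local-storage",
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-test-demo"},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"v122-worker2"},
						}},
					}},
				},
			},
		},
	}
}
```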
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:19.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Oct 5 12:03:21.395: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-14a03839-8c28-4299-94c3-a66392f1ac0d && mount --bind /tmp/local-volume-test-14a03839-8c28-4299-94c3-a66392f1ac0d /tmp/local-volume-test-14a03839-8c28-4299-94c3-a66392f1ac0d] Namespace:persistent-local-volumes-test-3548 PodName:hostexec-v122-worker-lch8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:03:21.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Oct 5 12:03:21.480: INFO: Creating a PV followed by a PVC
Oct 5 12:03:21.492: INFO: Waiting for PV local-pvc2wsr to bind to PVC pvc-9prdz
Oct 5 12:03:21.492: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9prdz] to have phase Bound
Oct 5 12:03:21.502: INFO: PersistentVolumeClaim pvc-9prdz found but phase is Pending instead of Bound.
Oct 5 12:03:23.506: INFO: PersistentVolumeClaim pvc-9prdz found but phase is Pending instead of Bound.
Oct 5 12:03:25.511: INFO: PersistentVolumeClaim pvc-9prdz found but phase is Pending instead of Bound.
Oct 5 12:03:27.516: INFO: PersistentVolumeClaim pvc-9prdz found but phase is Pending instead of Bound.
Oct 5 12:03:29.520: INFO: PersistentVolumeClaim pvc-9prdz found but phase is Pending instead of Bound.
Oct 5 12:03:31.525: INFO: PersistentVolumeClaim pvc-9prdz found but phase is Pending instead of Bound.
Oct 5 12:03:33.530: INFO: PersistentVolumeClaim pvc-9prdz found but phase is Pending instead of Bound.
Oct 5 12:03:35.536: INFO: PersistentVolumeClaim pvc-9prdz found and phase=Bound (14.044129239s)
Oct 5 12:03:35.536: INFO: Waiting up to 3m0s for PersistentVolume local-pvc2wsr to have phase Bound
Oct 5 12:03:35.539: INFO: PersistentVolume local-pvc2wsr found and phase=Bound (3.102828ms)
[It] should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
STEP: Creating pod1
STEP: Creating a pod
Oct 5 12:03:37.565: INFO: pod "pod-bd771fe7-df14-4f08-b01b-3a93644fa7d9" created on Node "v122-worker"
STEP: Writing in pod1
Oct 5 12:03:37.565: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3548 PodName:pod-bd771fe7-df14-4f08-b01b-3a93644fa7d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:37.566: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:37.656: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err:
Oct 5 12:03:37.656: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3548 PodName:pod-bd771fe7-df14-4f08-b01b-3a93644fa7d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:37.656: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:37.776: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Deleting pod1
STEP: Deleting pod pod-bd771fe7-df14-4f08-b01b-3a93644fa7d9 in namespace persistent-local-volumes-test-3548
STEP: Creating pod2
STEP: Creating a pod
Oct 5 12:03:39.799: INFO: pod "pod-80dd5891-eaab-4739-89db-68bfde759eeb" created on Node "v122-worker"
STEP: Reading in pod2
Oct 5 12:03:39.799: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3548 PodName:pod-80dd5891-eaab-4739-89db-68bfde759eeb ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:03:39.799: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:03:39.874: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err:
STEP: Deleting pod2
STEP: Deleting pod pod-80dd5891-eaab-4739-89db-68bfde759eeb in namespace persistent-local-volumes-test-3548
[AfterEach] [Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 5 12:03:39.878: INFO: Deleting PersistentVolumeClaim "pvc-9prdz"
Oct 5 12:03:39.882: INFO: Deleting PersistentVolume "local-pvc2wsr"
STEP: Removing the test directory
Oct 5 12:03:39.886: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-14a03839-8c28-4299-94c3-a66392f1ac0d && rm -r /tmp/local-volume-test-14a03839-8c28-4299-94c3-a66392f1ac0d] Namespace:persistent-local-volumes-test-3548 PodName:hostexec-v122-worker-lch8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:03:39.886: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:40.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3548" for this suite.
• [SLOW TEST:20.707 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume one after the other
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":25,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
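The repeated ExecWithOptions/podRWCmdExec entries above are the test framework running shell commands inside the test pods through the API server's exec subresource. A minimal client-go equivalent (namespace, pod and container names are illustrative, not taken from the log):

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent of the "cat /mnt/volume1/test-file" exec calls in the log.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("persistent-local-volumes-test-demo").
		Name("write-pod-demo").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "write-pod",
			Command:   []string{"/bin/sh", "-c", "cat /mnt/volume1/test-file"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("out: %q, stderr: %q\n", stdout.String(), stderr.String())
}
```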
[BeforeEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:03:13.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] CSIStorageCapacity used, insufficient capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
STEP: Building a driver namespace object, basename csi-mock-volumes-1636
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
Oct 5 12:03:13.409: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-attacher
Oct 5 12:03:13.412: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1636
Oct 5 12:03:13.412: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1636
Oct 5 12:03:13.414: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1636
Oct 5 12:03:13.417: INFO: creating *v1.Role: csi-mock-volumes-1636-5485/external-attacher-cfg-csi-mock-volumes-1636
Oct 5 12:03:13.419: INFO: creating *v1.RoleBinding: csi-mock-volumes-1636-5485/csi-attacher-role-cfg
Oct 5 12:03:13.423: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-provisioner
Oct 5 12:03:13.425: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1636
Oct 5 12:03:13.425: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1636
Oct 5 12:03:13.428: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1636
Oct 5 12:03:13.430: INFO: creating *v1.Role: csi-mock-volumes-1636-5485/external-provisioner-cfg-csi-mock-volumes-1636
Oct 5 12:03:13.433: INFO: creating *v1.RoleBinding: csi-mock-volumes-1636-5485/csi-provisioner-role-cfg
Oct 5 12:03:13.436: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-resizer
Oct 5 12:03:13.439: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1636
Oct 5 12:03:13.439: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1636
Oct 5 12:03:13.442: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1636
Oct 5 12:03:13.445: INFO: creating *v1.Role: csi-mock-volumes-1636-5485/external-resizer-cfg-csi-mock-volumes-1636
Oct 5 12:03:13.448: INFO: creating *v1.RoleBinding: csi-mock-volumes-1636-5485/csi-resizer-role-cfg
Oct 5 12:03:13.452: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-snapshotter
Oct 5 12:03:13.455: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1636
Oct 5 12:03:13.455: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1636
Oct 5 12:03:13.458: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1636
Oct 5 12:03:13.461: INFO: creating *v1.Role: csi-mock-volumes-1636-5485/external-snapshotter-leaderelection-csi-mock-volumes-1636
Oct 5 12:03:13.464: INFO: creating *v1.RoleBinding: csi-mock-volumes-1636-5485/external-snapshotter-leaderelection
Oct 5 12:03:13.467: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-mock
Oct 5 12:03:13.470: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1636
Oct 5 12:03:13.473: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1636
Oct 5 12:03:13.476: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1636
Oct 5 12:03:13.478: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1636
Oct 5 12:03:13.488: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1636
Oct 5 12:03:13.491: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1636
Oct 5 12:03:13.493: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1636
Oct 5 12:03:13.497: INFO: creating *v1.StatefulSet: csi-mock-volumes-1636-5485/csi-mockplugin
Oct 5 12:03:13.502: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1636
Oct 5 12:03:13.505: INFO: creating *v1.StatefulSet: csi-mock-volumes-1636-5485/csi-mockplugin-attacher
Oct 5 12:03:13.508: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1636"
Oct 5 12:03:13.511: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1636 to register on node v122-worker2
STEP: Creating pod
Oct 5 12:03:34.798: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Deleting the previously created pod
Oct 5 12:03:34.815: INFO: Deleting pod "pvc-volume-tester-sws6v" in namespace "csi-mock-volumes-1636"
Oct 5 12:03:34.824: INFO: Wait up to 5m0s for pod "pvc-volume-tester-sws6v" to be fully deleted
STEP: Deleting pod pvc-volume-tester-sws6v
Oct 5 12:03:34.827: INFO: Deleting pod "pvc-volume-tester-sws6v" in namespace "csi-mock-volumes-1636"
STEP: Deleting claim pvc-c9v7f
STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-1636
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-1636
STEP: Waiting for namespaces [csi-mock-volumes-1636] to vanish
STEP: uninstalling csi mock driver
Oct 5 12:03:40.852: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-attacher
Oct 5 12:03:40.860: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1636
Oct 5 12:03:40.866: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1636
Oct 5 12:03:40.870: INFO: deleting *v1.Role: csi-mock-volumes-1636-5485/external-attacher-cfg-csi-mock-volumes-1636
Oct 5 12:03:40.875: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1636-5485/csi-attacher-role-cfg
Oct 5 12:03:40.880: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-provisioner
Oct 5 12:03:40.884: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1636
Oct 5 12:03:40.889: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1636
Oct 5 12:03:40.893: INFO: deleting *v1.Role: csi-mock-volumes-1636-5485/external-provisioner-cfg-csi-mock-volumes-1636
Oct 5 12:03:40.898: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1636-5485/csi-provisioner-role-cfg
Oct 5 12:03:40.902: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-resizer
Oct 5 12:03:40.907: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1636
Oct 5 12:03:40.911: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1636
Oct 5 12:03:40.916: INFO: deleting *v1.Role: csi-mock-volumes-1636-5485/external-resizer-cfg-csi-mock-volumes-1636
Oct 5 12:03:40.920: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1636-5485/csi-resizer-role-cfg
Oct 5 12:03:40.925: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-snapshotter
Oct 5 12:03:40.930: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1636
Oct 5 12:03:40.934: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1636
Oct 5 12:03:40.939: INFO: deleting *v1.Role: csi-mock-volumes-1636-5485/external-snapshotter-leaderelection-csi-mock-volumes-1636
Oct 5 12:03:40.943: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1636-5485/external-snapshotter-leaderelection
Oct 5 12:03:40.947: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1636-5485/csi-mock
Oct 5 12:03:40.952: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1636
Oct 5 12:03:40.957: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1636
Oct 5 12:03:40.961: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1636
Oct 5 12:03:40.966: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1636
Oct 5 12:03:40.970: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1636
Oct 5 12:03:40.975: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1636
Oct 5 12:03:40.979: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1636
Oct 5 12:03:40.984: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1636-5485/csi-mockplugin
Oct 5 12:03:40.989: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1636
Oct 5 12:03:40.994: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1636-5485/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-1636-5485
STEP: Waiting for namespaces [csi-mock-volumes-1636-5485] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:03:53.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:39.708 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSIStorageCapacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
CSIStorageCapacity used, insufficient capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":1,"skipped":34,"failed":0}
SSSSSSSSSSSSS
------------------------------
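The CSIStorageCapacity spec above drives scheduling from capacity objects that the CSI sidecars publish per storage class and topology segment; when the published capacity is insufficient, the test pod stays unschedulable. A hedged sketch that lists those objects with client-go (the namespace is illustrative, and in the v1.22 timeframe of this run the API group is storage.k8s.io/v1beta1):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Each CSIStorageCapacity object reports how much storage a driver can
	// provision for one storage class in one topology segment.
	caps, err := client.StorageV1beta1().CSIStorageCapacities("csi-mock-volumes-demo").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range caps.Items {
		fmt.Printf("%s: class=%s capacity=%v topology=%v\n",
			c.Name, c.StorageClassName, c.Capacity, c.NodeTopology)
	}
}
```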
Elapsed: 6.015730605s STEP: Saw pod success Oct 5 12:03:59.113: INFO: Pod "pod-1a214edc-54b1-4ff1-bd63-b884c3606c00" satisfied condition "Succeeded or Failed" Oct 5 12:03:59.116: INFO: Trying to get logs from node v122-worker pod pod-1a214edc-54b1-4ff1-bd63-b884c3606c00 container test-container: STEP: delete the pod Oct 5 12:03:59.145: INFO: Waiting for pod pod-1a214edc-54b1-4ff1-bd63-b884c3606c00 to disappear Oct 5 12:03:59.149: INFO: Pod pod-1a214edc-54b1-4ff1-bd63-b884c3606c00 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:03:59.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3724" for this suite. • [SLOW TEST:6.113 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":2,"skipped":47,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:59.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:03:59.223: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:03:59.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9953" for this suite. 
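For orientation, the EmptyDir FSGroup spec above amounts to creating a pod whose tmpfs-backed emptyDir is mounted into a container while the pod runs with a pod-level fsGroup, then asserting on the volume's mode and group ownership. The following is a minimal client-go sketch of such a pod, not the e2e framework's own helper; the namespace, image, and the fsGroup value 1001 are illustrative assumptions rather than values taken from this log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	fsGroup := int64(1001) // assumed value; the log does not print the fsGroup used
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-fsgroup-"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what puts this emptyDir on tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}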
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.045 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:396 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:13.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300 STEP: Building a driver namespace object, basename csi-mock-volumes-5093 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:03:13.372: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5093-107/csi-attacher Oct 5 12:03:13.374: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5093 Oct 5 12:03:13.374: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5093 Oct 5 12:03:13.378: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5093 Oct 5 12:03:13.382: INFO: creating *v1.Role: csi-mock-volumes-5093-107/external-attacher-cfg-csi-mock-volumes-5093 Oct 5 12:03:13.384: INFO: creating *v1.RoleBinding: csi-mock-volumes-5093-107/csi-attacher-role-cfg Oct 5 12:03:13.388: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5093-107/csi-provisioner Oct 5 12:03:13.391: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5093 Oct 5 12:03:13.391: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5093 Oct 5 12:03:13.394: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5093 Oct 5 12:03:13.397: INFO: creating *v1.Role: csi-mock-volumes-5093-107/external-provisioner-cfg-csi-mock-volumes-5093 Oct 5 12:03:13.399: INFO: creating *v1.RoleBinding: csi-mock-volumes-5093-107/csi-provisioner-role-cfg Oct 5 12:03:13.402: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5093-107/csi-resizer Oct 5 12:03:13.405: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5093 Oct 5 12:03:13.405: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5093 Oct 5 12:03:13.408: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5093 Oct 5 12:03:13.411: INFO: creating *v1.Role: csi-mock-volumes-5093-107/external-resizer-cfg-csi-mock-volumes-5093 Oct 5 12:03:13.413: INFO: creating *v1.RoleBinding: csi-mock-volumes-5093-107/csi-resizer-role-cfg Oct 5 12:03:13.416: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5093-107/csi-snapshotter Oct 5 12:03:13.419: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-5093 Oct 5 12:03:13.419: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5093 Oct 5 12:03:13.422: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5093 Oct 5 12:03:13.425: INFO: creating *v1.Role: csi-mock-volumes-5093-107/external-snapshotter-leaderelection-csi-mock-volumes-5093 Oct 5 12:03:13.428: INFO: creating *v1.RoleBinding: csi-mock-volumes-5093-107/external-snapshotter-leaderelection Oct 5 12:03:13.431: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5093-107/csi-mock Oct 5 12:03:13.434: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5093 Oct 5 12:03:13.437: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5093 Oct 5 12:03:13.439: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5093 Oct 5 12:03:13.442: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5093 Oct 5 12:03:13.445: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5093 Oct 5 12:03:13.448: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5093 Oct 5 12:03:13.451: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5093 Oct 5 12:03:13.454: INFO: creating *v1.StatefulSet: csi-mock-volumes-5093-107/csi-mockplugin Oct 5 12:03:13.460: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5093 Oct 5 12:03:13.463: INFO: creating *v1.StatefulSet: csi-mock-volumes-5093-107/csi-mockplugin-attacher Oct 5 12:03:13.466: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5093" Oct 5 12:03:13.470: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5093 to register on node v122-worker2 STEP: Creating pod Oct 5 12:03:27.995: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Oct 5 12:03:48.018: INFO: Deleting pod "pvc-volume-tester-2q2h7" in namespace "csi-mock-volumes-5093" Oct 5 12:03:48.024: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2q2h7" to be fully deleted STEP: Deleting pod pvc-volume-tester-2q2h7 Oct 5 12:03:58.031: INFO: Deleting pod "pvc-volume-tester-2q2h7" in namespace "csi-mock-volumes-5093" STEP: Deleting claim pvc-mknqp Oct 5 12:03:58.047: INFO: Waiting up to 2m0s for PersistentVolume pvc-dcd6969c-e268-495e-b793-3cf74941254b to get deleted Oct 5 12:03:58.050: INFO: PersistentVolume pvc-dcd6969c-e268-495e-b793-3cf74941254b found and phase=Bound (3.536215ms) Oct 5 12:04:00.054: INFO: PersistentVolume pvc-dcd6969c-e268-495e-b793-3cf74941254b was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-5093 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5093 STEP: Waiting for namespaces [csi-mock-volumes-5093] to vanish STEP: uninstalling csi mock driver Oct 5 12:04:06.069: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5093-107/csi-attacher Oct 5 12:04:06.074: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5093 Oct 5 12:04:06.079: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5093 Oct 5 12:04:06.084: INFO: deleting *v1.Role: csi-mock-volumes-5093-107/external-attacher-cfg-csi-mock-volumes-5093 Oct 5 12:04:06.088: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5093-107/csi-attacher-role-cfg Oct 5 12:04:06.093: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-5093-107/csi-provisioner Oct 5 12:04:06.097: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5093 Oct 5 12:04:06.102: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5093 Oct 5 12:04:06.106: INFO: deleting *v1.Role: csi-mock-volumes-5093-107/external-provisioner-cfg-csi-mock-volumes-5093 Oct 5 12:04:06.111: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5093-107/csi-provisioner-role-cfg Oct 5 12:04:06.115: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5093-107/csi-resizer Oct 5 12:04:06.120: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5093 Oct 5 12:04:06.124: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5093 Oct 5 12:04:06.129: INFO: deleting *v1.Role: csi-mock-volumes-5093-107/external-resizer-cfg-csi-mock-volumes-5093 Oct 5 12:04:06.133: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5093-107/csi-resizer-role-cfg Oct 5 12:04:06.138: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5093-107/csi-snapshotter Oct 5 12:04:06.143: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5093 Oct 5 12:04:06.147: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5093 Oct 5 12:04:06.152: INFO: deleting *v1.Role: csi-mock-volumes-5093-107/external-snapshotter-leaderelection-csi-mock-volumes-5093 Oct 5 12:04:06.156: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5093-107/external-snapshotter-leaderelection Oct 5 12:04:06.161: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5093-107/csi-mock Oct 5 12:04:06.166: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5093 Oct 5 12:04:06.170: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5093 Oct 5 12:04:06.175: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5093 Oct 5 12:04:06.179: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5093 Oct 5 12:04:06.184: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5093 Oct 5 12:04:06.189: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5093 Oct 5 12:04:06.193: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5093 Oct 5 12:04:06.198: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5093-107/csi-mockplugin Oct 5 12:04:06.204: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5093 Oct 5 12:04:06.208: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5093-107/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5093-107 STEP: Waiting for namespaces [csi-mock-volumes-5093-107] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:18.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:64.936 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257 CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock 
volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":1,"skipped":18,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:19.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300 STEP: Building a driver namespace object, basename csi-mock-volumes-4303 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:03:19.676: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-attacher Oct 5 12:03:19.680: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4303 Oct 5 12:03:19.680: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4303 Oct 5 12:03:19.684: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4303 Oct 5 12:03:19.687: INFO: creating *v1.Role: csi-mock-volumes-4303-6527/external-attacher-cfg-csi-mock-volumes-4303 Oct 5 12:03:19.691: INFO: creating *v1.RoleBinding: csi-mock-volumes-4303-6527/csi-attacher-role-cfg Oct 5 12:03:19.695: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-provisioner Oct 5 12:03:19.698: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4303 Oct 5 12:03:19.698: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4303 Oct 5 12:03:19.702: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4303 Oct 5 12:03:19.705: INFO: creating *v1.Role: csi-mock-volumes-4303-6527/external-provisioner-cfg-csi-mock-volumes-4303 Oct 5 12:03:19.709: INFO: creating *v1.RoleBinding: csi-mock-volumes-4303-6527/csi-provisioner-role-cfg Oct 5 12:03:19.712: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-resizer Oct 5 12:03:19.715: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4303 Oct 5 12:03:19.716: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4303 Oct 5 12:03:19.719: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4303 Oct 5 12:03:19.722: INFO: creating *v1.Role: csi-mock-volumes-4303-6527/external-resizer-cfg-csi-mock-volumes-4303 Oct 5 12:03:19.726: INFO: creating *v1.RoleBinding: csi-mock-volumes-4303-6527/csi-resizer-role-cfg Oct 5 12:03:19.729: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-snapshotter Oct 5 12:03:19.733: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4303 Oct 5 12:03:19.733: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4303 Oct 5 12:03:19.736: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4303 Oct 5 12:03:19.739: INFO: creating *v1.Role: csi-mock-volumes-4303-6527/external-snapshotter-leaderelection-csi-mock-volumes-4303 Oct 5 12:03:19.743: INFO: creating *v1.RoleBinding: csi-mock-volumes-4303-6527/external-snapshotter-leaderelection Oct 5 12:03:19.746: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-mock Oct 5 12:03:19.749: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4303 Oct 5 
12:03:19.753: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4303 Oct 5 12:03:19.756: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4303 Oct 5 12:03:19.760: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4303 Oct 5 12:03:19.763: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4303 Oct 5 12:03:19.766: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4303 Oct 5 12:03:19.770: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4303 Oct 5 12:03:19.773: INFO: creating *v1.StatefulSet: csi-mock-volumes-4303-6527/csi-mockplugin Oct 5 12:03:19.778: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4303 Oct 5 12:03:19.782: INFO: creating *v1.StatefulSet: csi-mock-volumes-4303-6527/csi-mockplugin-attacher Oct 5 12:03:19.787: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4303" Oct 5 12:03:19.790: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4303 to register on node v122-worker2 STEP: Creating pod Oct 5 12:03:34.317: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Oct 5 12:03:46.341: INFO: Deleting pod "pvc-volume-tester-qhbbg" in namespace "csi-mock-volumes-4303" Oct 5 12:03:46.346: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qhbbg" to be fully deleted STEP: Deleting pod pvc-volume-tester-qhbbg Oct 5 12:03:58.355: INFO: Deleting pod "pvc-volume-tester-qhbbg" in namespace "csi-mock-volumes-4303" STEP: Deleting claim pvc-nh7cb Oct 5 12:03:58.368: INFO: Waiting up to 2m0s for PersistentVolume pvc-7ca8b988-1f72-4809-a1e3-d2e20c254bc6 to get deleted Oct 5 12:03:58.372: INFO: PersistentVolume pvc-7ca8b988-1f72-4809-a1e3-d2e20c254bc6 found and phase=Bound (4.174152ms) Oct 5 12:04:00.376: INFO: PersistentVolume pvc-7ca8b988-1f72-4809-a1e3-d2e20c254bc6 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-4303 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4303 STEP: Waiting for namespaces [csi-mock-volumes-4303] to vanish STEP: uninstalling csi mock driver Oct 5 12:04:06.391: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-attacher Oct 5 12:04:06.394: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4303 Oct 5 12:04:06.397: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4303 Oct 5 12:04:06.400: INFO: deleting *v1.Role: csi-mock-volumes-4303-6527/external-attacher-cfg-csi-mock-volumes-4303 Oct 5 12:04:06.403: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4303-6527/csi-attacher-role-cfg Oct 5 12:04:06.408: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-provisioner Oct 5 12:04:06.412: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4303 Oct 5 12:04:06.415: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4303 Oct 5 12:04:06.419: INFO: deleting *v1.Role: csi-mock-volumes-4303-6527/external-provisioner-cfg-csi-mock-volumes-4303 Oct 5 12:04:06.422: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4303-6527/csi-provisioner-role-cfg Oct 5 12:04:06.425: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-resizer Oct 5 12:04:06.429: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4303 Oct 5 12:04:06.432: INFO: deleting 
*v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4303 Oct 5 12:04:06.436: INFO: deleting *v1.Role: csi-mock-volumes-4303-6527/external-resizer-cfg-csi-mock-volumes-4303 Oct 5 12:04:06.439: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4303-6527/csi-resizer-role-cfg Oct 5 12:04:06.442: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-snapshotter Oct 5 12:04:06.446: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4303 Oct 5 12:04:06.448: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4303 Oct 5 12:04:06.452: INFO: deleting *v1.Role: csi-mock-volumes-4303-6527/external-snapshotter-leaderelection-csi-mock-volumes-4303 Oct 5 12:04:06.455: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4303-6527/external-snapshotter-leaderelection Oct 5 12:04:06.459: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4303-6527/csi-mock Oct 5 12:04:06.464: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4303 Oct 5 12:04:06.467: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4303 Oct 5 12:04:06.471: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4303 Oct 5 12:04:06.474: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4303 Oct 5 12:04:06.479: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4303 Oct 5 12:04:06.482: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4303 Oct 5 12:04:06.486: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4303 Oct 5 12:04:06.491: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4303-6527/csi-mockplugin Oct 5 12:04:06.495: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4303 Oct 5 12:04:06.499: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4303-6527/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4303-6527 STEP: Waiting for namespaces [csi-mock-volumes-4303-6527] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:22.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:62.934 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257 CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":3,"skipped":196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:18.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device 
[Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Oct 5 12:04:18.306: INFO: The status of Pod test-hostpath-type-6xkvw is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:20.311: INFO: The status of Pod test-hostpath-type-6xkvw is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:22.310: INFO: The status of Pod test-hostpath-type-6xkvw is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:24.311: INFO: The status of Pod test-hostpath-type-6xkvw is Running (Ready = true) STEP: running on node v122-worker STEP: Create a character device for further testing Oct 5 12:04:24.314: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-8358 PodName:test-hostpath-type-6xkvw ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:04:24.314: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:26.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-8358" for this suite. • [SLOW TEST:8.201 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory","total":-1,"completed":2,"skipped":31,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:07.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1005 12:03:07.080137 20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 5 12:03:07.080: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
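The CSIStorageCapacity specs in this run (used/insufficient capacity, disabled, used/have capacity, unused) revolve around the storage.k8s.io/v1beta1 CSIStorageCapacity objects that an external-provisioner can publish per topology segment and that the scheduler can consult before placing pods whose volumes wait for a first consumer. A rough client-go sketch of creating one such object follows; the namespace, storage class name, node label, and sizes are illustrative assumptions, not values from this run.

package main

import (
	"context"
	"fmt"

	storagev1beta1 "k8s.io/api/storage/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	capacity := resource.MustParse("5Gi")      // assumed capacity reported for this segment
	maxVolumeSize := resource.MustParse("2Gi") // assumed per-volume limit

	csc := &storagev1beta1.CSIStorageCapacity{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "mock-capacity-"},
		// Which storage class and topology segment this capacity figure applies to.
		StorageClassName: "mock-csi-storage-capacity", // assumed class name
		NodeTopology: &metav1.LabelSelector{
			MatchLabels: map[string]string{"kubernetes.io/hostname": "v122-worker2"}, // assumed topology label
		},
		Capacity:          &capacity,
		MaximumVolumeSize: &maxVolumeSize,
	}

	created, err := client.StorageV1beta1().CSIStorageCapacities("default").
		Create(context.TODO(), csc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created CSIStorageCapacity", created.Name)
}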
STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300 STEP: Building a driver namespace object, basename csi-mock-volumes-2916 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:03:07.151: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-attacher Oct 5 12:03:07.154: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2916 Oct 5 12:03:07.154: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2916 Oct 5 12:03:07.157: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2916 Oct 5 12:03:07.161: INFO: creating *v1.Role: csi-mock-volumes-2916-4747/external-attacher-cfg-csi-mock-volumes-2916 Oct 5 12:03:07.165: INFO: creating *v1.RoleBinding: csi-mock-volumes-2916-4747/csi-attacher-role-cfg Oct 5 12:03:07.168: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-provisioner Oct 5 12:03:07.172: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2916 Oct 5 12:03:07.172: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2916 Oct 5 12:03:07.175: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2916 Oct 5 12:03:07.178: INFO: creating *v1.Role: csi-mock-volumes-2916-4747/external-provisioner-cfg-csi-mock-volumes-2916 Oct 5 12:03:07.181: INFO: creating *v1.RoleBinding: csi-mock-volumes-2916-4747/csi-provisioner-role-cfg Oct 5 12:03:07.184: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-resizer Oct 5 12:03:07.188: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2916 Oct 5 12:03:07.188: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2916 Oct 5 12:03:07.191: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2916 Oct 5 12:03:07.194: INFO: creating *v1.Role: csi-mock-volumes-2916-4747/external-resizer-cfg-csi-mock-volumes-2916 Oct 5 12:03:07.197: INFO: creating *v1.RoleBinding: csi-mock-volumes-2916-4747/csi-resizer-role-cfg Oct 5 12:03:07.200: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-snapshotter Oct 5 12:03:07.203: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2916 Oct 5 12:03:07.203: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2916 Oct 5 12:03:07.207: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2916 Oct 5 12:03:07.210: INFO: creating *v1.Role: csi-mock-volumes-2916-4747/external-snapshotter-leaderelection-csi-mock-volumes-2916 Oct 5 12:03:07.213: INFO: creating *v1.RoleBinding: csi-mock-volumes-2916-4747/external-snapshotter-leaderelection Oct 5 12:03:07.217: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-mock Oct 5 12:03:07.220: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2916 Oct 5 12:03:07.223: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2916 Oct 5 12:03:07.226: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2916 Oct 5 12:03:07.230: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2916 Oct 5 12:03:07.233: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2916 Oct 5 
12:03:07.237: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2916 Oct 5 12:03:07.240: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2916 Oct 5 12:03:07.244: INFO: creating *v1.StatefulSet: csi-mock-volumes-2916-4747/csi-mockplugin Oct 5 12:03:07.251: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2916 Oct 5 12:03:07.255: INFO: creating *v1.StatefulSet: csi-mock-volumes-2916-4747/csi-mockplugin-attacher Oct 5 12:03:07.260: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2916" Oct 5 12:03:07.263: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2916 to register on node v122-worker2 STEP: Creating pod Oct 5 12:03:38.734: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Oct 5 12:03:58.765: INFO: Deleting pod "pvc-volume-tester-z4q82" in namespace "csi-mock-volumes-2916" Oct 5 12:03:58.771: INFO: Wait up to 5m0s for pod "pvc-volume-tester-z4q82" to be fully deleted STEP: Deleting pod pvc-volume-tester-z4q82 Oct 5 12:04:02.780: INFO: Deleting pod "pvc-volume-tester-z4q82" in namespace "csi-mock-volumes-2916" STEP: Deleting claim pvc-dwhhh Oct 5 12:04:02.791: INFO: Waiting up to 2m0s for PersistentVolume pvc-e0f39ca1-d216-44d4-9ea4-04686c8da135 to get deleted Oct 5 12:04:02.795: INFO: PersistentVolume pvc-e0f39ca1-d216-44d4-9ea4-04686c8da135 found and phase=Bound (3.406782ms) Oct 5 12:04:04.799: INFO: PersistentVolume pvc-e0f39ca1-d216-44d4-9ea4-04686c8da135 found and phase=Released (2.007440755s) Oct 5 12:04:06.802: INFO: PersistentVolume pvc-e0f39ca1-d216-44d4-9ea4-04686c8da135 found and phase=Released (4.010960365s) Oct 5 12:04:08.805: INFO: PersistentVolume pvc-e0f39ca1-d216-44d4-9ea4-04686c8da135 found and phase=Released (6.013394502s) Oct 5 12:04:10.809: INFO: PersistentVolume pvc-e0f39ca1-d216-44d4-9ea4-04686c8da135 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-2916 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2916 STEP: Waiting for namespaces [csi-mock-volumes-2916] to vanish STEP: uninstalling csi mock driver Oct 5 12:04:16.824: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-attacher Oct 5 12:04:16.829: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2916 Oct 5 12:04:16.833: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2916 Oct 5 12:04:16.838: INFO: deleting *v1.Role: csi-mock-volumes-2916-4747/external-attacher-cfg-csi-mock-volumes-2916 Oct 5 12:04:16.842: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2916-4747/csi-attacher-role-cfg Oct 5 12:04:16.846: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-provisioner Oct 5 12:04:16.851: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2916 Oct 5 12:04:16.855: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2916 Oct 5 12:04:16.860: INFO: deleting *v1.Role: csi-mock-volumes-2916-4747/external-provisioner-cfg-csi-mock-volumes-2916 Oct 5 12:04:16.864: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2916-4747/csi-provisioner-role-cfg Oct 5 12:04:16.869: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-resizer Oct 5 12:04:16.873: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2916 Oct 5 12:04:16.877: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2916 Oct 5 12:04:16.881: INFO: deleting 
*v1.Role: csi-mock-volumes-2916-4747/external-resizer-cfg-csi-mock-volumes-2916 Oct 5 12:04:16.885: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2916-4747/csi-resizer-role-cfg Oct 5 12:04:16.890: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-snapshotter Oct 5 12:04:16.894: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2916 Oct 5 12:04:16.898: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2916 Oct 5 12:04:16.903: INFO: deleting *v1.Role: csi-mock-volumes-2916-4747/external-snapshotter-leaderelection-csi-mock-volumes-2916 Oct 5 12:04:16.907: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2916-4747/external-snapshotter-leaderelection Oct 5 12:04:16.911: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2916-4747/csi-mock Oct 5 12:04:16.915: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2916 Oct 5 12:04:16.919: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2916 Oct 5 12:04:16.923: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2916 Oct 5 12:04:16.927: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2916 Oct 5 12:04:16.932: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2916 Oct 5 12:04:16.936: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2916 Oct 5 12:04:16.940: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2916 Oct 5 12:04:16.944: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2916-4747/csi-mockplugin Oct 5 12:04:16.949: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2916 Oct 5 12:04:16.953: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2916-4747/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-2916-4747 STEP: Waiting for namespaces [csi-mock-volumes-2916-4747] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:28.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:81.943 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257 CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":1,"skipped":1,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:59.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-3f249ca0-2bc2-4e63-9f38-f2691968f49b" Oct 5 12:04:03.395: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3f249ca0-2bc2-4e63-9f38-f2691968f49b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3f249ca0-2bc2-4e63-9f38-f2691968f49b" "/tmp/local-volume-test-3f249ca0-2bc2-4e63-9f38-f2691968f49b"] Namespace:persistent-local-volumes-test-8711 PodName:hostexec-v122-worker2-v7c2w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:03.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:04:03.546: INFO: Creating a PV followed by a PVC Oct 5 12:04:03.554: INFO: Waiting for PV local-pvf6l57 to bind to PVC pvc-nms7k Oct 5 12:04:03.554: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-nms7k] to have phase Bound Oct 5 12:04:03.557: INFO: PersistentVolumeClaim pvc-nms7k found but phase is Pending instead of Bound. Oct 5 12:04:05.562: INFO: PersistentVolumeClaim pvc-nms7k found and phase=Bound (2.007799299s) Oct 5 12:04:05.562: INFO: Waiting up to 3m0s for PersistentVolume local-pvf6l57 to have phase Bound Oct 5 12:04:05.565: INFO: PersistentVolume local-pvf6l57 found and phase=Bound (2.410308ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Oct 5 12:04:19.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8711 exec pod-1d4a0f56-da1b-401a-aa56-0ba68572906d --namespace=persistent-local-volumes-test-8711 -- stat -c %g /mnt/volume1' Oct 5 12:04:19.861: INFO: stderr: "" Oct 5 12:04:19.861: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Oct 5 12:04:35.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8711 exec pod-ef88ae33-294a-4c7b-b9d0-aa77899e2814 --namespace=persistent-local-volumes-test-8711 -- stat -c %g /mnt/volume1' Oct 5 12:04:36.092: INFO: stderr: "" Oct 5 12:04:36.092: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-1d4a0f56-da1b-401a-aa56-0ba68572906d in namespace persistent-local-volumes-test-8711 STEP: Deleting second pod STEP: Deleting pod pod-ef88ae33-294a-4c7b-b9d0-aa77899e2814 in namespace persistent-local-volumes-test-8711 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:04:36.102: INFO: Deleting PersistentVolumeClaim "pvc-nms7k" Oct 5 12:04:36.107: INFO: Deleting PersistentVolume "local-pvf6l57" STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-3f249ca0-2bc2-4e63-9f38-f2691968f49b" Oct 5 12:04:36.112: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3f249ca0-2bc2-4e63-9f38-f2691968f49b"] Namespace:persistent-local-volumes-test-8711 PodName:hostexec-v122-worker2-v7c2w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:36.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:04:36.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3f249ca0-2bc2-4e63-9f38-f2691968f49b] Namespace:persistent-local-volumes-test-8711 PodName:hostexec-v122-worker2-v7c2w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:36.265: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:36.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8711" for this suite. • [SLOW TEST:37.078 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":3,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:29.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Oct 5 12:04:41.061: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5241 PodName:hostexec-v122-worker2-755pw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:41.061: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:04:41.192: INFO: exec v122-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Oct 5 
12:04:41.192: INFO: exec v122-worker2: stdout: "0\n" Oct 5 12:04:41.192: INFO: exec v122-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Oct 5 12:04:41.192: INFO: exec v122-worker2: exit code: 0 Oct 5 12:04:41.192: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:41.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5241" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [12.200 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1250 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:07.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1005 12:03:07.124114 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 5 12:03:07.124: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
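The PersistentVolumes-local specs in this run follow one pattern: prepare a directory or mount on a specific node (a tmpfs mount, a bind mount, or, when present, a local SSD), publish it as a local PersistentVolume pinned to that node via volume node affinity, and bind a PVC to it before running pods. The following is a minimal client-go sketch of such a PV; the path, size, and storage class name are illustrative assumptions, not the framework's own manifest.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv-"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Mi"), // illustrative size
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage", // assumed class name
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// The directory prepared on the node (tmpfs mount, bind mount, ...).
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-test-example"}, // assumed path
			},
			// A local PV must be pinned to the node that owns the backing directory.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"v122-worker2"},
						}},
					}},
				},
			},
		},
	}

	created, err := client.CoreV1().PersistentVolumes().Create(context.TODO(), pv, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PersistentVolume", created.Name)
}

The specs then create a PVC against the same class and wait for both objects to reach phase Bound, matching the "Waiting for PV ... to bind to PVC ..." lines in the log above.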
STEP: Waiting for a default service account to be provisioned in namespace [It] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583 STEP: Building a driver namespace object, basename csi-mock-volumes-9070 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:03:07.256: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-attacher Oct 5 12:03:07.261: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9070 Oct 5 12:03:07.261: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9070 Oct 5 12:03:07.265: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9070 Oct 5 12:03:07.269: INFO: creating *v1.Role: csi-mock-volumes-9070-8898/external-attacher-cfg-csi-mock-volumes-9070 Oct 5 12:03:07.272: INFO: creating *v1.RoleBinding: csi-mock-volumes-9070-8898/csi-attacher-role-cfg Oct 5 12:03:07.277: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-provisioner Oct 5 12:03:07.281: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9070 Oct 5 12:03:07.281: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9070 Oct 5 12:03:07.285: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9070 Oct 5 12:03:07.294: INFO: creating *v1.Role: csi-mock-volumes-9070-8898/external-provisioner-cfg-csi-mock-volumes-9070 Oct 5 12:03:07.298: INFO: creating *v1.RoleBinding: csi-mock-volumes-9070-8898/csi-provisioner-role-cfg Oct 5 12:03:07.302: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-resizer Oct 5 12:03:07.305: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9070 Oct 5 12:03:07.305: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9070 Oct 5 12:03:07.309: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9070 Oct 5 12:03:07.313: INFO: creating *v1.Role: csi-mock-volumes-9070-8898/external-resizer-cfg-csi-mock-volumes-9070 Oct 5 12:03:07.317: INFO: creating *v1.RoleBinding: csi-mock-volumes-9070-8898/csi-resizer-role-cfg Oct 5 12:03:07.321: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-snapshotter Oct 5 12:03:07.324: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9070 Oct 5 12:03:07.324: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9070 Oct 5 12:03:07.328: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9070 Oct 5 12:03:07.332: INFO: creating *v1.Role: csi-mock-volumes-9070-8898/external-snapshotter-leaderelection-csi-mock-volumes-9070 Oct 5 12:03:07.336: INFO: creating *v1.RoleBinding: csi-mock-volumes-9070-8898/external-snapshotter-leaderelection Oct 5 12:03:07.340: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-mock Oct 5 12:03:07.343: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9070 Oct 5 12:03:07.347: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9070 Oct 5 12:03:07.351: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9070 Oct 5 12:03:07.355: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9070 Oct 5 12:03:07.358: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-resizer-role-csi-mock-volumes-9070 Oct 5 12:03:07.362: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9070 Oct 5 12:03:07.366: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9070 Oct 5 12:03:07.370: INFO: creating *v1.StatefulSet: csi-mock-volumes-9070-8898/csi-mockplugin Oct 5 12:03:07.376: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9070 Oct 5 12:03:07.380: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9070" Oct 5 12:03:07.383: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9070 to register on node v122-worker2 STEP: Creating pod with fsGroup Oct 5 12:03:38.788: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:03:38.798: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-2f5cd] to have phase Bound Oct 5 12:03:38.801: INFO: PersistentVolumeClaim pvc-2f5cd found but phase is Pending instead of Bound. Oct 5 12:03:40.812: INFO: PersistentVolumeClaim pvc-2f5cd found and phase=Bound (2.013480003s) Oct 5 12:03:54.830: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-9070] Namespace:csi-mock-volumes-9070 PodName:pvc-volume-tester-xkl4r ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:03:54.830: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:03:54.885: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-9070/csi-mock-volumes-9070'; sync] Namespace:csi-mock-volumes-9070 PodName:pvc-volume-tester-xkl4r ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:03:54.885: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:03:54.986: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-9070/csi-mock-volumes-9070] Namespace:csi-mock-volumes-9070 PodName:pvc-volume-tester-xkl4r ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:03:54.986: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:03:55.060: INFO: pod csi-mock-volumes-9070/pvc-volume-tester-xkl4r exec for cmd ls -l /mnt/test/csi-mock-volumes-9070/csi-mock-volumes-9070, stdout: -rw-r--r-- 1 root root 13 Oct 5 12:03 /mnt/test/csi-mock-volumes-9070/csi-mock-volumes-9070, stderr: Oct 5 12:03:55.060: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-9070] Namespace:csi-mock-volumes-9070 PodName:pvc-volume-tester-xkl4r ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:03:55.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-xkl4r Oct 5 12:03:55.143: INFO: Deleting pod "pvc-volume-tester-xkl4r" in namespace "csi-mock-volumes-9070" Oct 5 12:03:55.147: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xkl4r" to be fully deleted STEP: Deleting claim pvc-2f5cd Oct 5 12:04:33.163: INFO: Waiting up to 2m0s for PersistentVolume pvc-7defaab0-28f7-42f1-a7dd-20aab63eaf4c to get deleted Oct 5 12:04:33.166: INFO: PersistentVolume pvc-7defaab0-28f7-42f1-a7dd-20aab63eaf4c found and phase=Bound (2.967609ms) Oct 5 12:04:35.170: INFO: PersistentVolume pvc-7defaab0-28f7-42f1-a7dd-20aab63eaf4c was removed STEP: Deleting storageclass csi-mock-volumes-9070-schx6pq STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9070 STEP: Waiting for namespaces 
[csi-mock-volumes-9070] to vanish STEP: uninstalling csi mock driver Oct 5 12:04:41.184: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-attacher Oct 5 12:04:41.189: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9070 Oct 5 12:04:41.194: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9070 Oct 5 12:04:41.198: INFO: deleting *v1.Role: csi-mock-volumes-9070-8898/external-attacher-cfg-csi-mock-volumes-9070 Oct 5 12:04:41.202: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9070-8898/csi-attacher-role-cfg Oct 5 12:04:41.207: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-provisioner Oct 5 12:04:41.211: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9070 Oct 5 12:04:41.216: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9070 Oct 5 12:04:41.220: INFO: deleting *v1.Role: csi-mock-volumes-9070-8898/external-provisioner-cfg-csi-mock-volumes-9070 Oct 5 12:04:41.228: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9070-8898/csi-provisioner-role-cfg Oct 5 12:04:41.232: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-resizer Oct 5 12:04:41.236: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9070 Oct 5 12:04:41.240: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9070 Oct 5 12:04:41.244: INFO: deleting *v1.Role: csi-mock-volumes-9070-8898/external-resizer-cfg-csi-mock-volumes-9070 Oct 5 12:04:41.249: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9070-8898/csi-resizer-role-cfg Oct 5 12:04:41.256: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-snapshotter Oct 5 12:04:41.261: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9070 Oct 5 12:04:41.267: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9070 Oct 5 12:04:41.271: INFO: deleting *v1.Role: csi-mock-volumes-9070-8898/external-snapshotter-leaderelection-csi-mock-volumes-9070 Oct 5 12:04:41.276: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9070-8898/external-snapshotter-leaderelection Oct 5 12:04:41.280: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9070-8898/csi-mock Oct 5 12:04:41.285: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9070 Oct 5 12:04:41.289: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9070 Oct 5 12:04:41.293: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9070 Oct 5 12:04:41.298: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9070 Oct 5 12:04:41.302: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9070 Oct 5 12:04:41.306: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9070 Oct 5 12:04:41.311: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9070 Oct 5 12:04:41.316: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9070-8898/csi-mockplugin Oct 5 12:04:41.321: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9070 STEP: deleting the driver namespace: csi-mock-volumes-9070-8898 STEP: Waiting for namespaces [csi-mock-volumes-9070-8898] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:53.344: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready • [SLOW TEST:106.254 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559 should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":1,"skipped":33,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:36.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:04:48.672: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-f6ed2687-4bb0-4b7d-8a68-a94f8a9bfa0f && mount --bind /tmp/local-volume-test-f6ed2687-4bb0-4b7d-8a68-a94f8a9bfa0f /tmp/local-volume-test-f6ed2687-4bb0-4b7d-8a68-a94f8a9bfa0f] Namespace:persistent-local-volumes-test-1759 PodName:hostexec-v122-worker2-xnfhd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:48.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:04:48.822: INFO: Creating a PV followed by a PVC Oct 5 12:04:48.831: INFO: Waiting for PV local-pvvc6tv to bind to PVC pvc-4ggjq Oct 5 12:04:48.831: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4ggjq] to have phase Bound Oct 5 12:04:48.834: INFO: PersistentVolumeClaim pvc-4ggjq found but phase is Pending instead of Bound. 
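The dir-bindmounted volume type being set up here is prepared entirely on the node: the privileged hostexec pod enters the node's mount namespace with nsenter, creates a temporary directory, and bind-mounts it onto itself before a local PersistentVolume and PVC are created on top of it. A minimal shell sketch of the node-side steps shown in the log, using /tmp/local-volume-test-example as a stand-in for the randomly generated path:

    # setup (run from the hostexec pod, in the node's mount namespace)
    nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c \
      'mkdir /tmp/local-volume-test-example &&
       mount --bind /tmp/local-volume-test-example /tmp/local-volume-test-example'

    # teardown, mirrored by the AfterEach step later in the log
    nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c \
      'umount /tmp/local-volume-test-example && rm -r /tmp/local-volume-test-example'

Bind-mounting the directory onto itself gives the path its own mount entry, which is what distinguishes this volume type from a plain directory in the local-volume tests.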
Oct 5 12:04:50.839: INFO: PersistentVolumeClaim pvc-4ggjq found and phase=Bound (2.007719223s) Oct 5 12:04:50.839: INFO: Waiting up to 3m0s for PersistentVolume local-pvvc6tv to have phase Bound Oct 5 12:04:50.843: INFO: PersistentVolume local-pvvc6tv found and phase=Bound (3.67298ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:04:52.870: INFO: pod "pod-a4ea6177-488d-4241-a5b2-f89060018403" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:04:52.870: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1759 PodName:pod-a4ea6177-488d-4241-a5b2-f89060018403 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:04:52.870: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:04:52.963: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Oct 5 12:04:52.963: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1759 PodName:pod-a4ea6177-488d-4241-a5b2-f89060018403 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:04:52.963: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:04:53.088: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Oct 5 12:04:53.088: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f6ed2687-4bb0-4b7d-8a68-a94f8a9bfa0f > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1759 PodName:pod-a4ea6177-488d-4241-a5b2-f89060018403 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:04:53.088: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:04:53.205: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f6ed2687-4bb0-4b7d-8a68-a94f8a9bfa0f > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-a4ea6177-488d-4241-a5b2-f89060018403 in namespace persistent-local-volumes-test-1759 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:04:53.211: INFO: Deleting PersistentVolumeClaim "pvc-4ggjq" Oct 5 12:04:53.216: INFO: Deleting PersistentVolume "local-pvvc6tv" STEP: Removing the test directory Oct 5 12:04:53.220: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-f6ed2687-4bb0-4b7d-8a68-a94f8a9bfa0f && rm -r /tmp/local-volume-test-f6ed2687-4bb0-4b7d-8a68-a94f8a9bfa0f] Namespace:persistent-local-volumes-test-1759 PodName:hostexec-v122-worker2-xnfhd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Oct 5 12:04:53.220: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:53.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1759" for this suite. • [SLOW TEST:16.741 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:41.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Oct 5 12:04:41.267: INFO: The status of Pod test-hostpath-type-7k6gd is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:43.271: INFO: The status of Pod test-hostpath-type-7k6gd is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:45.272: INFO: The status of Pod test-hostpath-type-7k6gd is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:47.272: INFO: The status of Pod test-hostpath-type-7k6gd is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:49.272: INFO: The status of Pod test-hostpath-type-7k6gd is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:51.273: INFO: The status of Pod test-hostpath-type-7k6gd is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:53.272: INFO: The status of Pod test-hostpath-type-7k6gd is Running (Ready = true) STEP: running on node v122-worker STEP: Create a block device for further testing Oct 5 12:04:53.275: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-5825 PodName:test-hostpath-type-7k6gd ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:04:53.275: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:55.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-5825" for this suite. • [SLOW TEST:14.176 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory","total":-1,"completed":2,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:25.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 STEP: Building a driver namespace object, basename csi-mock-volumes-4058 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Oct 5 12:03:25.869: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-attacher Oct 5 12:03:25.873: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4058 Oct 5 12:03:25.873: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4058 Oct 5 12:03:25.877: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4058 Oct 5 12:03:25.881: INFO: creating *v1.Role: csi-mock-volumes-4058-4154/external-attacher-cfg-csi-mock-volumes-4058 Oct 5 12:03:25.885: INFO: creating *v1.RoleBinding: csi-mock-volumes-4058-4154/csi-attacher-role-cfg Oct 5 12:03:25.889: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-provisioner Oct 5 12:03:25.895: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4058 Oct 5 12:03:25.895: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4058 Oct 5 12:03:25.899: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4058 Oct 5 12:03:25.903: INFO: creating *v1.Role: csi-mock-volumes-4058-4154/external-provisioner-cfg-csi-mock-volumes-4058 Oct 5 12:03:25.907: INFO: creating *v1.RoleBinding: csi-mock-volumes-4058-4154/csi-provisioner-role-cfg Oct 5 12:03:25.911: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-resizer Oct 5 12:03:25.915: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4058 Oct 5 12:03:25.915: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4058 Oct 5 12:03:25.919: INFO: creating *v1.ClusterRoleBinding: 
csi-resizer-role-csi-mock-volumes-4058 Oct 5 12:03:25.922: INFO: creating *v1.Role: csi-mock-volumes-4058-4154/external-resizer-cfg-csi-mock-volumes-4058 Oct 5 12:03:25.926: INFO: creating *v1.RoleBinding: csi-mock-volumes-4058-4154/csi-resizer-role-cfg Oct 5 12:03:25.930: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-snapshotter Oct 5 12:03:25.933: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4058 Oct 5 12:03:25.933: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4058 Oct 5 12:03:25.937: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4058 Oct 5 12:03:25.940: INFO: creating *v1.Role: csi-mock-volumes-4058-4154/external-snapshotter-leaderelection-csi-mock-volumes-4058 Oct 5 12:03:25.944: INFO: creating *v1.RoleBinding: csi-mock-volumes-4058-4154/external-snapshotter-leaderelection Oct 5 12:03:25.948: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-mock Oct 5 12:03:25.951: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4058 Oct 5 12:03:25.955: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4058 Oct 5 12:03:25.959: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4058 Oct 5 12:03:25.962: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4058 Oct 5 12:03:25.966: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4058 Oct 5 12:03:25.970: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4058 Oct 5 12:03:25.973: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4058 Oct 5 12:03:25.977: INFO: creating *v1.StatefulSet: csi-mock-volumes-4058-4154/csi-mockplugin Oct 5 12:03:25.984: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4058 Oct 5 12:03:25.988: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4058" Oct 5 12:03:25.991: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4058 to register on node v122-worker I1005 12:03:34.037830 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4058","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:03:34.128466 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1005 12:03:34.130903 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4058","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:03:34.132815 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I1005 12:03:34.135681 27 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1005 12:03:34.751479 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4058","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} STEP: Creating pod Oct 5 12:03:35.513: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I1005 12:03:35.539161 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-9061b6ae-57e5-4210-823e-d62180c06fbf","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1005 12:03:38.372102 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-9061b6ae-57e5-4210-823e-d62180c06fbf","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-9061b6ae-57e5-4210-823e-d62180c06fbf"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I1005 12:03:39.584662 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:03:39.587778 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:03:39.590: INFO: >>> kubeConfig: /root/.kube/config I1005 12:03:39.712591 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9061b6ae-57e5-4210-823e-d62180c06fbf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9061b6ae-57e5-4210-823e-d62180c06fbf","storage.kubernetes.io/csiProvisionerIdentity":"1664971414137-8081-csi-mock-csi-mock-volumes-4058"}},"Response":{},"Error":"","FullError":null} I1005 12:03:39.717729 27 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:03:39.719567 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:03:39.721: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:03:39.824: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:03:39.918: INFO: >>> kubeConfig: /root/.kube/config I1005 12:03:40.063868 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9061b6ae-57e5-4210-823e-d62180c06fbf/globalmount","target_path":"/var/lib/kubelet/pods/aee6ef02-c211-4f9f-bb72-7b6f5cfc5cc7/volumes/kubernetes.io~csi/pvc-9061b6ae-57e5-4210-823e-d62180c06fbf/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9061b6ae-57e5-4210-823e-d62180c06fbf","storage.kubernetes.io/csiProvisionerIdentity":"1664971414137-8081-csi-mock-csi-mock-volumes-4058"}},"Response":{},"Error":"","FullError":null} Oct 5 12:03:45.532: INFO: Deleting pod "pvc-volume-tester-6t4h6" in namespace "csi-mock-volumes-4058" Oct 5 12:03:45.538: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6t4h6" to be fully deleted Oct 5 12:03:52.589: INFO: >>> kubeConfig: /root/.kube/config I1005 12:03:52.729512 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/aee6ef02-c211-4f9f-bb72-7b6f5cfc5cc7/volumes/kubernetes.io~csi/pvc-9061b6ae-57e5-4210-823e-d62180c06fbf/mount"},"Response":{},"Error":"","FullError":null} I1005 12:03:52.794453 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:03:52.797013 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9061b6ae-57e5-4210-823e-d62180c06fbf/globalmount"},"Response":{},"Error":"","FullError":null} I1005 12:03:57.579914 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Oct 5 12:03:58.551: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k4ws2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4058", SelfLink:"", UID:"9061b6ae-57e5-4210-823e-d62180c06fbf", ResourceVersion:"1991", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568215, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0040122b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0040122d0), 
Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0037e7a10), VolumeMode:(*v1.PersistentVolumeMode)(0xc0037e7a20), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:03:58.552: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k4ws2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4058", SelfLink:"", UID:"9061b6ae-57e5-4210-823e-d62180c06fbf", ResourceVersion:"1994", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568215, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"v122-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004012330), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004012348), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004012360), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004012378), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0037e7a50), VolumeMode:(*v1.PersistentVolumeMode)(0xc0037e7a60), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:03:58.552: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k4ws2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4058", SelfLink:"", UID:"9061b6ae-57e5-4210-823e-d62180c06fbf", ResourceVersion:"1995", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568215, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4058", "volume.kubernetes.io/selected-node":"v122-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a570), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a588), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a5a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a5b8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a5d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a5e8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0042cb5c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0042cb5d0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:03:58.552: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k4ws2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4058", SelfLink:"", UID:"9061b6ae-57e5-4210-823e-d62180c06fbf", ResourceVersion:"1999", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568215, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4058"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a600), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a618), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a630), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a648), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a678), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0042cb600), VolumeMode:(*v1.PersistentVolumeMode)(0xc0042cb610), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), 
Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:03:58.552: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k4ws2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4058", SelfLink:"", UID:"9061b6ae-57e5-4210-823e-d62180c06fbf", ResourceVersion:"2039", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568215, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4058", "volume.kubernetes.io/selected-node":"v122-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005202690), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0052026a8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0052026c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0052026d8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0052026f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005202720), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0048b9cd0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0048b9ce0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:03:58.552: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k4ws2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4058", SelfLink:"", UID:"9061b6ae-57e5-4210-823e-d62180c06fbf", ResourceVersion:"2045", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568215, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4058", "volume.kubernetes.io/selected-node":"v122-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005202750), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc005202768), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc005202798), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc0052027b0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0052027e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0052027f8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9061b6ae-57e5-4210-823e-d62180c06fbf", StorageClassName:(*string)(0xc0048b9d10), VolumeMode:(*v1.PersistentVolumeMode)(0xc0048b9d20), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:03:58.553: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k4ws2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4058", SelfLink:"", UID:"9061b6ae-57e5-4210-823e-d62180c06fbf", ResourceVersion:"2046", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568215, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4058", "volume.kubernetes.io/selected-node":"v122-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a6c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a6d8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a6f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a708), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a738), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a750), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a768), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a780), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9061b6ae-57e5-4210-823e-d62180c06fbf", StorageClassName:(*string)(0xc0042cb6b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0042cb6c0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:03:58.553: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k4ws2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4058", SelfLink:"", UID:"9061b6ae-57e5-4210-823e-d62180c06fbf", ResourceVersion:"2637", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568215, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(0xc004c3a7c8), DeletionGracePeriodSeconds:(*int64)(0xc0012fa358), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4058", "volume.kubernetes.io/selected-node":"v122-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a7e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a7f8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a810), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a828), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a840), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a858), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a870), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a8a0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9061b6ae-57e5-4210-823e-d62180c06fbf", StorageClassName:(*string)(0xc0042cb700), VolumeMode:(*v1.PersistentVolumeMode)(0xc0042cb710), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:03:58.553: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k4ws2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4058", SelfLink:"", UID:"9061b6ae-57e5-4210-823e-d62180c06fbf", ResourceVersion:"2638", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568215, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(0xc004c3a8d0), DeletionGracePeriodSeconds:(*int64)(0xc0012fa458), Labels:map[string]string(nil), 
Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4058", "volume.kubernetes.io/selected-node":"v122-worker"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a8e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a918), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a930), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a948), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a978), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004c3a990), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004c3a9a8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9061b6ae-57e5-4210-823e-d62180c06fbf", StorageClassName:(*string)(0xc0042cb750), VolumeMode:(*v1.PersistentVolumeMode)(0xc0042cb760), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-6t4h6 Oct 5 12:03:58.553: INFO: Deleting pod "pvc-volume-tester-6t4h6" in namespace "csi-mock-volumes-4058" STEP: Deleting claim pvc-k4ws2 STEP: Deleting storageclass csi-mock-volumes-4058-schs8dn STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4058 STEP: Waiting for namespaces [csi-mock-volumes-4058] to vanish STEP: uninstalling csi mock driver Oct 5 12:04:11.596: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-attacher Oct 5 12:04:11.600: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4058 Oct 5 12:04:11.604: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4058 Oct 5 12:04:11.608: INFO: deleting *v1.Role: csi-mock-volumes-4058-4154/external-attacher-cfg-csi-mock-volumes-4058 Oct 5 12:04:11.612: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4058-4154/csi-attacher-role-cfg Oct 5 12:04:11.616: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-provisioner Oct 5 12:04:11.619: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4058 Oct 5 12:04:11.623: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4058 Oct 5 12:04:11.627: INFO: deleting *v1.Role: csi-mock-volumes-4058-4154/external-provisioner-cfg-csi-mock-volumes-4058 Oct 5 12:04:11.635: INFO: 
deleting *v1.RoleBinding: csi-mock-volumes-4058-4154/csi-provisioner-role-cfg Oct 5 12:04:11.640: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-resizer Oct 5 12:04:11.644: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4058 Oct 5 12:04:11.649: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4058 Oct 5 12:04:11.654: INFO: deleting *v1.Role: csi-mock-volumes-4058-4154/external-resizer-cfg-csi-mock-volumes-4058 Oct 5 12:04:11.658: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4058-4154/csi-resizer-role-cfg Oct 5 12:04:11.663: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-snapshotter Oct 5 12:04:11.667: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4058 Oct 5 12:04:11.671: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4058 Oct 5 12:04:11.675: INFO: deleting *v1.Role: csi-mock-volumes-4058-4154/external-snapshotter-leaderelection-csi-mock-volumes-4058 Oct 5 12:04:11.679: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4058-4154/external-snapshotter-leaderelection Oct 5 12:04:11.684: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4058-4154/csi-mock Oct 5 12:04:11.688: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4058 Oct 5 12:04:11.692: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4058 Oct 5 12:04:11.696: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4058 Oct 5 12:04:11.700: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4058 Oct 5 12:04:11.705: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4058 Oct 5 12:04:11.709: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4058 Oct 5 12:04:11.714: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4058 Oct 5 12:04:11.718: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4058-4154/csi-mockplugin Oct 5 12:04:11.723: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4058 STEP: deleting the driver namespace: csi-mock-volumes-4058-4154 STEP: Waiting for namespaces [csi-mock-volumes-4058-4154] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:55.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:89.980 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023 exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":2,"skipped":105,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:55.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Oct 5 12:04:55.782: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:04:55.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-901" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:81 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:53.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Oct 5 12:04:53.454: INFO: The status of Pod test-hostpath-type-s9knm is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:04:55.461: INFO: The status of Pod test-hostpath-type-s9knm is Running (Ready = true) STEP: running on node v122-worker2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:01.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-7793" for this suite. 
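The two HostPathType specs in this stretch of the log follow the same pattern: a helper pod first creates the path on the node (a block device made with mknod in the Block Device case; a regular file that the kubelet creates itself via HostPathFileOrCreate in the File case), and a second pod then mounts that path with a deliberately mismatched hostPath type, so the test expects a HostPathType error event rather than a running container. A short sketch of the node-side preparation, assuming standard coreutils in the helper container and that 'afile' lives under the same /mnt/test directory as the block device (only the block-device path is shown explicitly in the log):

    # Block Device case: create a dummy block device node (major 89, minor 1, as in the log)
    mknod /mnt/test/ablkdev b 89 1

    # File case: HostPathFileOrCreate lets the kubelet create the file; done by hand it would be
    # (path assumed for illustration)
    touch /mnt/test/afile

    # the type check distinguishes the two by file type, visible here as 'b' vs '-'
    ls -l /mnt/test/ablkdev /mnt/test/afile

Mounting ablkdev with type HostPathDirectory, or afile with type HostPathBlockDevice, therefore fails the kubelet's hostPath type check and produces the error event each test waits for.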
• [SLOW TEST:8.104 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev","total":-1,"completed":2,"skipped":78,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:34.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "v122-worker" STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-8e9bf6ca-255a-41b2-9587-2b3565176fe8" Oct 5 12:03:36.261: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8e9bf6ca-255a-41b2-9587-2b3565176fe8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8e9bf6ca-255a-41b2-9587-2b3565176fe8" "/tmp/local-volume-test-8e9bf6ca-255a-41b2-9587-2b3565176fe8"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:36.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-534c2bb7-e3d8-4cda-939b-3b543e519895" Oct 5 12:03:36.401: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-534c2bb7-e3d8-4cda-939b-3b543e519895" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-534c2bb7-e3d8-4cda-939b-3b543e519895" "/tmp/local-volume-test-534c2bb7-e3d8-4cda-939b-3b543e519895"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:36.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-2a37a7f2-6be7-46d4-a7d3-51f314294e20" Oct 5 12:03:36.542: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2a37a7f2-6be7-46d4-a7d3-51f314294e20" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2a37a7f2-6be7-46d4-a7d3-51f314294e20" "/tmp/local-volume-test-2a37a7f2-6be7-46d4-a7d3-51f314294e20"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:36.542: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-cabbf990-a9a2-4d59-9259-4dd121330d27" Oct 5 12:03:36.682: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-cabbf990-a9a2-4d59-9259-4dd121330d27" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cabbf990-a9a2-4d59-9259-4dd121330d27" "/tmp/local-volume-test-cabbf990-a9a2-4d59-9259-4dd121330d27"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:36.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-f4b2cffc-24a4-4003-b013-5d9cfd992fb7" Oct 5 12:03:36.832: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f4b2cffc-24a4-4003-b013-5d9cfd992fb7" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f4b2cffc-24a4-4003-b013-5d9cfd992fb7" "/tmp/local-volume-test-f4b2cffc-24a4-4003-b013-5d9cfd992fb7"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:36.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-5fffd71f-a7df-414e-bea6-6e53d402cf66" Oct 5 12:03:36.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5fffd71f-a7df-414e-bea6-6e53d402cf66" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5fffd71f-a7df-414e-bea6-6e53d402cf66" "/tmp/local-volume-test-5fffd71f-a7df-414e-bea6-6e53d402cf66"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:36.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-2e0658c2-1228-4773-b684-93fdf179abad" Oct 5 12:03:37.075: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2e0658c2-1228-4773-b684-93fdf179abad" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2e0658c2-1228-4773-b684-93fdf179abad" "/tmp/local-volume-test-2e0658c2-1228-4773-b684-93fdf179abad"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:37.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-e1cd3260-dbe9-47f6-bd45-2e9c3905863c" Oct 5 12:03:37.198: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e1cd3260-dbe9-47f6-bd45-2e9c3905863c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e1cd3260-dbe9-47f6-bd45-2e9c3905863c" "/tmp/local-volume-test-e1cd3260-dbe9-47f6-bd45-2e9c3905863c"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:37.198: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-910a4719-c0c9-402d-8496-837916afbdce" Oct 5 12:03:37.353: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-910a4719-c0c9-402d-8496-837916afbdce" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-910a4719-c0c9-402d-8496-837916afbdce" "/tmp/local-volume-test-910a4719-c0c9-402d-8496-837916afbdce"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:37.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-8073e474-6c56-4b92-a86f-7dc451ac466f" Oct 5 12:03:37.464: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8073e474-6c56-4b92-a86f-7dc451ac466f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8073e474-6c56-4b92-a86f-7dc451ac466f" "/tmp/local-volume-test-8073e474-6c56-4b92-a86f-7dc451ac466f"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:37.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "v122-worker2" STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-66c6b6fa-a058-495f-87f7-a6838943c3e2" Oct 5 12:03:41.615: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-66c6b6fa-a058-495f-87f7-a6838943c3e2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-66c6b6fa-a058-495f-87f7-a6838943c3e2" "/tmp/local-volume-test-66c6b6fa-a058-495f-87f7-a6838943c3e2"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:41.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-e74281b7-f337-46dd-863c-0f6b07059fa8" Oct 5 12:03:41.722: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e74281b7-f337-46dd-863c-0f6b07059fa8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e74281b7-f337-46dd-863c-0f6b07059fa8" "/tmp/local-volume-test-e74281b7-f337-46dd-863c-0f6b07059fa8"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:41.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-d439dd17-fdc6-438c-8271-5f55bfd6dc69" Oct 5 12:03:41.838: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d439dd17-fdc6-438c-8271-5f55bfd6dc69" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d439dd17-fdc6-438c-8271-5f55bfd6dc69" "/tmp/local-volume-test-d439dd17-fdc6-438c-8271-5f55bfd6dc69"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Oct 5 12:03:41.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-f9489208-00ef-400f-a6aa-40f18ae5ae3e" Oct 5 12:03:41.925: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f9489208-00ef-400f-a6aa-40f18ae5ae3e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f9489208-00ef-400f-a6aa-40f18ae5ae3e" "/tmp/local-volume-test-f9489208-00ef-400f-a6aa-40f18ae5ae3e"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:41.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-23e2001c-f84e-44f7-9550-371fc91ca242" Oct 5 12:03:42.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-23e2001c-f84e-44f7-9550-371fc91ca242" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-23e2001c-f84e-44f7-9550-371fc91ca242" "/tmp/local-volume-test-23e2001c-f84e-44f7-9550-371fc91ca242"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:42.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-a1177ee8-e77e-43a2-a7d0-e1fde136f82a" Oct 5 12:03:42.156: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a1177ee8-e77e-43a2-a7d0-e1fde136f82a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a1177ee8-e77e-43a2-a7d0-e1fde136f82a" "/tmp/local-volume-test-a1177ee8-e77e-43a2-a7d0-e1fde136f82a"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:42.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-15b1831a-de8b-4301-b4ea-f87dd5caa7eb" Oct 5 12:03:42.284: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-15b1831a-de8b-4301-b4ea-f87dd5caa7eb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-15b1831a-de8b-4301-b4ea-f87dd5caa7eb" "/tmp/local-volume-test-15b1831a-de8b-4301-b4ea-f87dd5caa7eb"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:42.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-c3b56b2c-9d86-4b94-8654-a1b1a4339efe" Oct 5 12:03:42.419: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c3b56b2c-9d86-4b94-8654-a1b1a4339efe" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c3b56b2c-9d86-4b94-8654-a1b1a4339efe" "/tmp/local-volume-test-c3b56b2c-9d86-4b94-8654-a1b1a4339efe"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:42.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-e108ceed-0780-42c8-9647-aab11a25b01a" Oct 5 12:03:42.549: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e108ceed-0780-42c8-9647-aab11a25b01a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e108ceed-0780-42c8-9647-aab11a25b01a" "/tmp/local-volume-test-e108ceed-0780-42c8-9647-aab11a25b01a"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:42.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-c98725a0-a095-40d1-acf3-696247c7bf23" Oct 5 12:03:42.690: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c98725a0-a095-40d1-acf3-696247c7bf23" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c98725a0-a095-40d1-acf3-696247c7bf23" "/tmp/local-volume-test-c98725a0-a095-40d1-acf3-696247c7bf23"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:03:42.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully STEP: Delete "local-pvc2wsr" and create a new PV for same local volume storage Oct 5 12:03:56.023: INFO: Deleting pod pod-d7b2691c-8fe8-4b78-97c4-d0d8f6947b44 Oct 5 12:03:56.032: INFO: Deleting PersistentVolumeClaim "pvc-6bvqh" Oct 5 12:03:56.037: INFO: Deleting PersistentVolumeClaim "pvc-7h8wr" Oct 5 12:03:56.042: INFO: Deleting PersistentVolumeClaim "pvc-q9wdj" Oct 5 12:03:56.048: INFO: 1/28 pods finished STEP: Delete "local-pvd2lf4" and create a new PV for same local volume storage STEP: Delete "local-pv4nv5x" and create a new PV for same local volume storage STEP: Delete "local-pvfc7vn" and create a new PV for same local volume storage STEP: Delete "pvc-9061b6ae-57e5-4210-823e-d62180c06fbf" and create a new PV for same local volume storage STEP: Delete "pvc-9061b6ae-57e5-4210-823e-d62180c06fbf" and create a new PV for same local volume storage Oct 5 12:03:58.023: INFO: Deleting pod pod-7f0e04be-e911-4a09-a4dd-588a8abc24c2 Oct 5 12:03:58.032: INFO: Deleting PersistentVolumeClaim "pvc-jr5mb" Oct 5 12:03:58.037: INFO: Deleting PersistentVolumeClaim "pvc-cxg6s" Oct 5 12:03:58.044: INFO: Deleting PersistentVolumeClaim "pvc-2j2wf" Oct 5 12:03:58.050: INFO: 2/28 pods finished Oct 5 12:03:58.050: INFO: Deleting pod pod-8a6a5e9d-9a90-4472-88e8-7d64666080d1 Oct 5 12:03:58.059: INFO: Deleting PersistentVolumeClaim "pvc-bbkm7" Oct 5 12:03:58.063: INFO: Deleting PersistentVolumeClaim "pvc-rwccw" STEP: Delete "local-pvkhlcl" and create a new PV for same local volume storage Oct 5 12:03:58.069: INFO: Deleting PersistentVolumeClaim "pvc-8gpwl" Oct 5 12:03:58.073: INFO: 3/28 pods finished STEP: Delete "local-pv9kcq4" and create a new PV for same local 
volume storage STEP: Delete "pvc-dcd6969c-e268-495e-b793-3cf74941254b" and create a new PV for same local volume storage STEP: Delete "local-pvnpc9g" and create a new PV for same local volume storage STEP: Delete "local-pvb2w2j" and create a new PV for same local volume storage STEP: Delete "local-pvzdmjl" and create a new PV for same local volume storage STEP: Delete "local-pvmr9jb" and create a new PV for same local volume storage STEP: Delete "pvc-7ca8b988-1f72-4809-a1e3-d2e20c254bc6" and create a new PV for same local volume storage STEP: Delete "pvc-dcd6969c-e268-495e-b793-3cf74941254b" and create a new PV for same local volume storage STEP: Delete "pvc-dcd6969c-e268-495e-b793-3cf74941254b" and create a new PV for same local volume storage STEP: Delete "pvc-7ca8b988-1f72-4809-a1e3-d2e20c254bc6" and create a new PV for same local volume storage STEP: Delete "pvc-7ca8b988-1f72-4809-a1e3-d2e20c254bc6" and create a new PV for same local volume storage Oct 5 12:04:00.023: INFO: Deleting pod pod-625df3aa-d5d0-49e3-b540-b707b9ad83b3 Oct 5 12:04:00.031: INFO: Deleting PersistentVolumeClaim "pvc-4x6cw" Oct 5 12:04:00.036: INFO: Deleting PersistentVolumeClaim "pvc-66zb8" Oct 5 12:04:00.040: INFO: Deleting PersistentVolumeClaim "pvc-5xzq4" Oct 5 12:04:00.045: INFO: 4/28 pods finished STEP: Delete "local-pvd6zzm" and create a new PV for same local volume storage STEP: Delete "local-pvj8lwc" and create a new PV for same local volume storage STEP: Delete "local-pvcwt5f" and create a new PV for same local volume storage Oct 5 12:04:01.145: INFO: Deleting pod pod-43422e0f-7c5b-44d4-9ca6-8f1bf83301dc Oct 5 12:04:01.153: INFO: Deleting PersistentVolumeClaim "pvc-kwhgx" Oct 5 12:04:01.158: INFO: Deleting PersistentVolumeClaim "pvc-c58gl" Oct 5 12:04:01.164: INFO: Deleting PersistentVolumeClaim "pvc-ccqwn" Oct 5 12:04:01.169: INFO: 5/28 pods finished Oct 5 12:04:01.169: INFO: Deleting pod pod-63b38614-d566-4a35-a340-45c5499dc6d8 Oct 5 12:04:01.176: INFO: Deleting PersistentVolumeClaim "pvc-7p4qz" Oct 5 12:04:01.181: INFO: Deleting PersistentVolumeClaim "pvc-45r7t" Oct 5 12:04:01.185: INFO: Deleting PersistentVolumeClaim "pvc-nzqtm" Oct 5 12:04:01.190: INFO: 6/28 pods finished STEP: Delete "local-pvkl7x6" and create a new PV for same local volume storage STEP: Delete "local-pvbh4tp" and create a new PV for same local volume storage STEP: Delete "local-pvxmnhw" and create a new PV for same local volume storage STEP: Delete "local-pvvxc9v" and create a new PV for same local volume storage STEP: Delete "local-pvfkfng" and create a new PV for same local volume storage STEP: Delete "local-pv2cp99" and create a new PV for same local volume storage STEP: Delete "pvc-e0f39ca1-d216-44d4-9ea4-04686c8da135" and create a new PV for same local volume storage Oct 5 12:04:07.021: INFO: Deleting pod pod-45bb884b-5991-445c-80c1-023eba30a424 Oct 5 12:04:07.027: INFO: Deleting PersistentVolumeClaim "pvc-mwqxc" Oct 5 12:04:07.031: INFO: Deleting PersistentVolumeClaim "pvc-jp5cg" Oct 5 12:04:07.034: INFO: Deleting PersistentVolumeClaim "pvc-lbk7l" Oct 5 12:04:07.038: INFO: 7/28 pods finished STEP: Delete "local-pvnqps6" and create a new PV for same local volume storage STEP: Delete "local-pv6hjn5" and create a new PV for same local volume storage STEP: Delete "local-pv5ms4w" and create a new PV for same local volume storage STEP: Delete "pvc-e0f39ca1-d216-44d4-9ea4-04686c8da135" and create a new PV for same local volume storage STEP: Delete "pvc-e0f39ca1-d216-44d4-9ea4-04686c8da135" and create a new PV for same local volume 
storage Oct 5 12:04:10.022: INFO: Deleting pod pod-3f7648dd-77a9-4cee-bec5-aafd297d1765 Oct 5 12:04:10.030: INFO: Deleting PersistentVolumeClaim "pvc-wwpd4" Oct 5 12:04:10.036: INFO: Deleting PersistentVolumeClaim "pvc-prqzl" Oct 5 12:04:10.041: INFO: Deleting PersistentVolumeClaim "pvc-r7v9j" Oct 5 12:04:10.046: INFO: 8/28 pods finished STEP: Delete "local-pv4pdbt" and create a new PV for same local volume storage STEP: Delete "local-pvgmcx8" and create a new PV for same local volume storage STEP: Delete "local-pvpdvzv" and create a new PV for same local volume storage Oct 5 12:04:12.021: INFO: Deleting pod pod-2887c659-e0b2-4e10-971f-e35921c29281 Oct 5 12:04:12.029: INFO: Deleting PersistentVolumeClaim "pvc-nnmps" Oct 5 12:04:12.036: INFO: Deleting PersistentVolumeClaim "pvc-nqbbw" Oct 5 12:04:12.040: INFO: Deleting PersistentVolumeClaim "pvc-f5ckc" Oct 5 12:04:12.044: INFO: 9/28 pods finished STEP: Delete "local-pvwcbms" and create a new PV for same local volume storage STEP: Delete "local-pvtp5pb" and create a new PV for same local volume storage STEP: Delete "local-pvb84jt" and create a new PV for same local volume storage Oct 5 12:04:16.021: INFO: Deleting pod pod-ba6e0c64-2a48-4a7f-9b10-9c24ef398f00 Oct 5 12:04:16.028: INFO: Deleting PersistentVolumeClaim "pvc-tsqgz" Oct 5 12:04:16.033: INFO: Deleting PersistentVolumeClaim "pvc-hzw95" Oct 5 12:04:16.037: INFO: Deleting PersistentVolumeClaim "pvc-nzvnr" Oct 5 12:04:16.041: INFO: 10/28 pods finished STEP: Delete "local-pv9zzjk" and create a new PV for same local volume storage STEP: Delete "local-pvtvxtf" and create a new PV for same local volume storage STEP: Delete "local-pv5rmgp" and create a new PV for same local volume storage Oct 5 12:04:17.021: INFO: Deleting pod pod-3fbfbb4f-1647-46df-a555-c27344329a9d Oct 5 12:04:17.034: INFO: Deleting PersistentVolumeClaim "pvc-mfqxk" Oct 5 12:04:17.038: INFO: Deleting PersistentVolumeClaim "pvc-nbb49" Oct 5 12:04:17.041: INFO: Deleting PersistentVolumeClaim "pvc-grw4g" Oct 5 12:04:17.045: INFO: 11/28 pods finished STEP: Delete "local-pvv879x" and create a new PV for same local volume storage STEP: Delete "local-pvnpt2j" and create a new PV for same local volume storage STEP: Delete "local-pvrsjnm" and create a new PV for same local volume storage Oct 5 12:04:20.022: INFO: Deleting pod pod-4c987cd0-796c-414b-a803-92a43a4bf0ca Oct 5 12:04:20.031: INFO: Deleting PersistentVolumeClaim "pvc-bxn72" Oct 5 12:04:20.036: INFO: Deleting PersistentVolumeClaim "pvc-kwblj" Oct 5 12:04:20.041: INFO: Deleting PersistentVolumeClaim "pvc-ntr7n" Oct 5 12:04:20.045: INFO: 12/28 pods finished STEP: Delete "local-pvlcwvc" and create a new PV for same local volume storage STEP: Delete "local-pv2vbfc" and create a new PV for same local volume storage STEP: Delete "local-pvmq7tc" and create a new PV for same local volume storage Oct 5 12:04:21.023: INFO: Deleting pod pod-0dc04f20-939d-49d8-af2f-85c0ef189abc Oct 5 12:04:21.032: INFO: Deleting PersistentVolumeClaim "pvc-hwv7f" Oct 5 12:04:21.037: INFO: Deleting PersistentVolumeClaim "pvc-l9bst" Oct 5 12:04:21.042: INFO: Deleting PersistentVolumeClaim "pvc-5jsh6" Oct 5 12:04:21.046: INFO: 13/28 pods finished STEP: Delete "local-pvprnlq" and create a new PV for same local volume storage STEP: Delete "local-pvkmw92" and create a new PV for same local volume storage STEP: Delete "local-pv66xhj" and create a new PV for same local volume storage Oct 5 12:04:22.022: INFO: Deleting pod pod-2a30a7ac-4372-4296-be9e-029aa56c72d9 Oct 5 12:04:22.028: INFO: Deleting 
PersistentVolumeClaim "pvc-8jdjc" Oct 5 12:04:22.033: INFO: Deleting PersistentVolumeClaim "pvc-rrd5z" Oct 5 12:04:22.038: INFO: Deleting PersistentVolumeClaim "pvc-zpxz9" Oct 5 12:04:22.042: INFO: 14/28 pods finished STEP: Delete "local-pvjr4fk" and create a new PV for same local volume storage STEP: Delete "local-pvpbc7l" and create a new PV for same local volume storage STEP: Delete "local-pv2psgx" and create a new PV for same local volume storage Oct 5 12:04:28.022: INFO: Deleting pod pod-e09470bc-2116-4fc8-aea7-62d6932a68f2 Oct 5 12:04:28.032: INFO: Deleting PersistentVolumeClaim "pvc-26phz" Oct 5 12:04:28.042: INFO: Deleting PersistentVolumeClaim "pvc-s7cvj" Oct 5 12:04:28.047: INFO: Deleting PersistentVolumeClaim "pvc-qj6mr" Oct 5 12:04:28.052: INFO: 15/28 pods finished STEP: Delete "local-pvk5tmv" and create a new PV for same local volume storage STEP: Delete "local-pvkdcnw" and create a new PV for same local volume storage STEP: Delete "local-pv9gnqp" and create a new PV for same local volume storage Oct 5 12:04:29.022: INFO: Deleting pod pod-e8e81ad7-0292-4b7e-acc8-feb2634b8d72 Oct 5 12:04:29.030: INFO: Deleting PersistentVolumeClaim "pvc-8phv7" Oct 5 12:04:29.036: INFO: Deleting PersistentVolumeClaim "pvc-rn744" Oct 5 12:04:29.041: INFO: Deleting PersistentVolumeClaim "pvc-9v9vh" Oct 5 12:04:29.045: INFO: 16/28 pods finished STEP: Delete "local-pv7hhvd" and create a new PV for same local volume storage STEP: Delete "local-pv45x65" and create a new PV for same local volume storage STEP: Delete "local-pv7k279" and create a new PV for same local volume storage Oct 5 12:04:30.022: INFO: Deleting pod pod-9b3eaaf2-f4a6-483a-bbf7-75e161959bd0 Oct 5 12:04:30.030: INFO: Deleting PersistentVolumeClaim "pvc-vrnqh" Oct 5 12:04:30.041: INFO: Deleting PersistentVolumeClaim "pvc-zms84" Oct 5 12:04:30.046: INFO: Deleting PersistentVolumeClaim "pvc-zbd68" Oct 5 12:04:30.050: INFO: 17/28 pods finished STEP: Delete "local-pvv6ftr" and create a new PV for same local volume storage STEP: Delete "local-pv7hkrr" and create a new PV for same local volume storage STEP: Delete "local-pvfcz8c" and create a new PV for same local volume storage Oct 5 12:04:31.025: INFO: Deleting pod pod-c456ad67-3148-44d6-a0ad-bd47806c41f2 Oct 5 12:04:31.041: INFO: Deleting PersistentVolumeClaim "pvc-jrltk" Oct 5 12:04:31.045: INFO: Deleting PersistentVolumeClaim "pvc-czpf6" Oct 5 12:04:31.051: INFO: Deleting PersistentVolumeClaim "pvc-thcdh" Oct 5 12:04:31.055: INFO: 18/28 pods finished STEP: Delete "local-pvj9h8v" and create a new PV for same local volume storage STEP: Delete "local-pvgcjmp" and create a new PV for same local volume storage STEP: Delete "local-pvmrb9g" and create a new PV for same local volume storage STEP: Delete "pvc-7defaab0-28f7-42f1-a7dd-20aab63eaf4c" and create a new PV for same local volume storage STEP: Delete "pvc-7defaab0-28f7-42f1-a7dd-20aab63eaf4c" and create a new PV for same local volume storage Oct 5 12:04:34.022: INFO: Deleting pod pod-6f51d4a1-b10e-4052-8d29-c187b77cf5e1 Oct 5 12:04:34.031: INFO: Deleting PersistentVolumeClaim "pvc-9n9t4" Oct 5 12:04:34.041: INFO: Deleting PersistentVolumeClaim "pvc-cs8dp" Oct 5 12:04:34.045: INFO: Deleting PersistentVolumeClaim "pvc-xv9pg" Oct 5 12:04:34.050: INFO: 19/28 pods finished STEP: Delete "local-pv5hz2s" and create a new PV for same local volume storage STEP: Delete "local-pv5hz2s" and create a new PV for same local volume storage STEP: Delete "local-pvnpqsq" and create a new PV for same local volume storage STEP: Delete "local-pv2x6w2" and 
create a new PV for same local volume storage Oct 5 12:04:37.025: INFO: Deleting pod pod-f5621960-7df5-40dd-8cf0-2d1a97a6472a Oct 5 12:04:37.035: INFO: Deleting PersistentVolumeClaim "pvc-kprlr" Oct 5 12:04:37.039: INFO: Deleting PersistentVolumeClaim "pvc-2qw4p" Oct 5 12:04:37.044: INFO: Deleting PersistentVolumeClaim "pvc-z2nww" Oct 5 12:04:37.048: INFO: 20/28 pods finished STEP: Delete "local-pvnl5pp" and create a new PV for same local volume storage STEP: Delete "local-pvsq4tc" and create a new PV for same local volume storage STEP: Delete "local-pv7hjjj" and create a new PV for same local volume storage Oct 5 12:04:43.022: INFO: Deleting pod pod-ec1c29e8-b57e-4e29-b7b5-9c1aee9f8b24 Oct 5 12:04:43.031: INFO: Deleting PersistentVolumeClaim "pvc-8ph7x" Oct 5 12:04:43.042: INFO: Deleting PersistentVolumeClaim "pvc-nkqs6" Oct 5 12:04:43.047: INFO: Deleting PersistentVolumeClaim "pvc-vjkdz" Oct 5 12:04:43.052: INFO: 21/28 pods finished STEP: Delete "local-pv7xzrv" and create a new PV for same local volume storage STEP: Delete "local-pv2m26c" and create a new PV for same local volume storage STEP: Delete "local-pvb49cz" and create a new PV for same local volume storage Oct 5 12:04:44.022: INFO: Deleting pod pod-f3be5e91-cda6-4663-9d9b-7e2c06055983 Oct 5 12:04:44.029: INFO: Deleting PersistentVolumeClaim "pvc-2987h" Oct 5 12:04:44.034: INFO: Deleting PersistentVolumeClaim "pvc-755n8" Oct 5 12:04:44.040: INFO: Deleting PersistentVolumeClaim "pvc-z97hr" Oct 5 12:04:44.045: INFO: 22/28 pods finished STEP: Delete "local-pvzfrtw" and create a new PV for same local volume storage STEP: Delete "local-pv9fl8d" and create a new PV for same local volume storage STEP: Delete "local-pvdqwvz" and create a new PV for same local volume storage STEP: Delete "local-pvf6l57" and create a new PV for same local volume storage Oct 5 12:04:47.022: INFO: Deleting pod pod-387ec40c-10d1-496d-b22a-a3d34436cb1a Oct 5 12:04:47.031: INFO: Deleting PersistentVolumeClaim "pvc-b88xk" Oct 5 12:04:47.042: INFO: Deleting PersistentVolumeClaim "pvc-5g4cj" Oct 5 12:04:47.047: INFO: Deleting PersistentVolumeClaim "pvc-hb6xf" Oct 5 12:04:47.052: INFO: 23/28 pods finished STEP: Delete "local-pvs5tdv" and create a new PV for same local volume storage STEP: Delete "local-pvmb4k4" and create a new PV for same local volume storage STEP: Delete "local-pvv4v6p" and create a new PV for same local volume storage Oct 5 12:04:48.022: INFO: Deleting pod pod-cc881d6a-de41-4063-88c7-79fb6828aef5 Oct 5 12:04:48.030: INFO: Deleting PersistentVolumeClaim "pvc-j87hj" Oct 5 12:04:48.036: INFO: Deleting PersistentVolumeClaim "pvc-x7zjb" Oct 5 12:04:48.041: INFO: Deleting PersistentVolumeClaim "pvc-wncl2" Oct 5 12:04:48.046: INFO: 24/28 pods finished STEP: Delete "local-pv9rlwp" and create a new PV for same local volume storage STEP: Delete "local-pvmhlvl" and create a new PV for same local volume storage STEP: Delete "local-pvtlgsp" and create a new PV for same local volume storage Oct 5 12:04:50.021: INFO: Deleting pod pod-32953d01-07b3-4f0f-a203-e53dc112d125 Oct 5 12:04:50.029: INFO: Deleting PersistentVolumeClaim "pvc-nfvgd" Oct 5 12:04:50.034: INFO: Deleting PersistentVolumeClaim "pvc-zhdw8" Oct 5 12:04:50.039: INFO: Deleting PersistentVolumeClaim "pvc-vz6cx" Oct 5 12:04:50.044: INFO: 25/28 pods finished STEP: Delete "local-pvsfjb8" and create a new PV for same local volume storage STEP: Delete "local-pv8rzs8" and create a new PV for same local volume storage STEP: Delete "local-pv8vwc7" and create a new PV for same local volume storage Oct 5 
12:04:51.021: INFO: Deleting pod pod-7d209d3f-7882-40e2-b60d-8f2b5414c997 Oct 5 12:04:51.030: INFO: Deleting PersistentVolumeClaim "pvc-rqxwj" Oct 5 12:04:51.040: INFO: Deleting PersistentVolumeClaim "pvc-p7cbq" Oct 5 12:04:51.045: INFO: Deleting PersistentVolumeClaim "pvc-xlqhh" Oct 5 12:04:51.050: INFO: 26/28 pods finished STEP: Delete "local-pvqv95k" and create a new PV for same local volume storage STEP: Delete "local-pvk9r7k" and create a new PV for same local volume storage STEP: Delete "local-pv9c2q7" and create a new PV for same local volume storage STEP: Delete "local-pvvc6tv" and create a new PV for same local volume storage Oct 5 12:04:59.022: INFO: Deleting pod pod-c880cf3e-089f-4cd5-b287-5e71423c281b Oct 5 12:04:59.032: INFO: Deleting PersistentVolumeClaim "pvc-gsdzw" Oct 5 12:04:59.038: INFO: Deleting PersistentVolumeClaim "pvc-sh8z8" Oct 5 12:04:59.043: INFO: Deleting PersistentVolumeClaim "pvc-xdhtf" Oct 5 12:04:59.048: INFO: 27/28 pods finished Oct 5 12:04:59.048: INFO: Deleting pod pod-e941e132-3fb2-409d-b79a-39ea85a09661 Oct 5 12:04:59.057: INFO: Deleting PersistentVolumeClaim "pvc-qqsqk" Oct 5 12:04:59.062: INFO: Deleting PersistentVolumeClaim "pvc-2xh9j" STEP: Delete "local-pvpdm75" and create a new PV for same local volume storage Oct 5 12:04:59.069: INFO: Deleting PersistentVolumeClaim "pvc-zvrc9" Oct 5 12:04:59.075: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "v122-worker" STEP: Cleaning up PVC and PV Oct 5 12:04:59.076: INFO: pvc is nil Oct 5 12:04:59.076: INFO: Deleting PersistentVolume "local-pvbdnnk" STEP: Cleaning up PVC and PV Oct 5 12:04:59.082: INFO: pvc is nil Oct 5 12:04:59.082: INFO: Deleting PersistentVolume "local-pvztv75" STEP: Cleaning up PVC and PV Oct 5 12:04:59.086: INFO: pvc is nil Oct 5 12:04:59.086: INFO: Deleting PersistentVolume "local-pvv4bpb" STEP: Cleaning up PVC and PV Oct 5 12:04:59.105: INFO: pvc is nil Oct 5 12:04:59.105: INFO: Deleting PersistentVolume "local-pvbtpz2" STEP: Cleaning up PVC and PV Oct 5 12:04:59.114: INFO: pvc is nil Oct 5 12:04:59.114: INFO: Deleting PersistentVolume "local-pvw467t" STEP: Cleaning up PVC and PV Oct 5 12:04:59.118: INFO: pvc is nil Oct 5 12:04:59.118: INFO: Deleting PersistentVolume "local-pvwc98r" STEP: Cleaning up PVC and PV Oct 5 12:04:59.123: INFO: pvc is nil Oct 5 12:04:59.123: INFO: Deleting PersistentVolume "local-pv2pht7" STEP: Cleaning up PVC and PV Oct 5 12:04:59.127: INFO: pvc is nil Oct 5 12:04:59.127: INFO: Deleting PersistentVolume "local-pvrbjx2" STEP: Cleaning up PVC and PV Oct 5 12:04:59.132: INFO: pvc is nil Oct 5 12:04:59.132: INFO: Deleting PersistentVolume "local-pvds6wx" STEP: Cleaning up PVC and PV Oct 5 12:04:59.136: INFO: pvc is nil Oct 5 12:04:59.136: INFO: Deleting PersistentVolume "local-pvq99db" STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-8e9bf6ca-255a-41b2-9587-2b3565176fe8" Oct 5 12:04:59.141: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8e9bf6ca-255a-41b2-9587-2b3565176fe8"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:59.141: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:04:59.251: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8e9bf6ca-255a-41b2-9587-2b3565176fe8] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:59.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-534c2bb7-e3d8-4cda-939b-3b543e519895" Oct 5 12:04:59.343: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-534c2bb7-e3d8-4cda-939b-3b543e519895"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:59.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:04:59.550: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-534c2bb7-e3d8-4cda-939b-3b543e519895] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:59.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-2a37a7f2-6be7-46d4-a7d3-51f314294e20" Oct 5 12:04:59.662: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2a37a7f2-6be7-46d4-a7d3-51f314294e20"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:59.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:04:59.836: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2a37a7f2-6be7-46d4-a7d3-51f314294e20] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:59.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-cabbf990-a9a2-4d59-9259-4dd121330d27" Oct 5 12:04:59.946: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cabbf990-a9a2-4d59-9259-4dd121330d27"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:59.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:00.024: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cabbf990-a9a2-4d59-9259-4dd121330d27] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:00.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker" at 
path "/tmp/local-volume-test-f4b2cffc-24a4-4003-b013-5d9cfd992fb7" Oct 5 12:05:00.165: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f4b2cffc-24a4-4003-b013-5d9cfd992fb7"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:00.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:00.289: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f4b2cffc-24a4-4003-b013-5d9cfd992fb7] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:00.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-5fffd71f-a7df-414e-bea6-6e53d402cf66" Oct 5 12:05:00.389: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5fffd71f-a7df-414e-bea6-6e53d402cf66"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:00.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:00.490: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5fffd71f-a7df-414e-bea6-6e53d402cf66] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:00.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-2e0658c2-1228-4773-b684-93fdf179abad" Oct 5 12:05:00.621: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2e0658c2-1228-4773-b684-93fdf179abad"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:00.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:00.789: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2e0658c2-1228-4773-b684-93fdf179abad] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:00.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-e1cd3260-dbe9-47f6-bd45-2e9c3905863c" Oct 5 12:05:00.926: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e1cd3260-dbe9-47f6-bd45-2e9c3905863c"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:00.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:01.048: 
INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e1cd3260-dbe9-47f6-bd45-2e9c3905863c] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:01.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-910a4719-c0c9-402d-8496-837916afbdce" Oct 5 12:05:01.140: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-910a4719-c0c9-402d-8496-837916afbdce"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:01.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:01.278: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-910a4719-c0c9-402d-8496-837916afbdce] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:01.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-8073e474-6c56-4b92-a86f-7dc451ac466f" Oct 5 12:05:01.419: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8073e474-6c56-4b92-a86f-7dc451ac466f"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:01.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:01.554: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8073e474-6c56-4b92-a86f-7dc451ac466f] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker-pr8vs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:01.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "v122-worker2" STEP: Cleaning up PVC and PV Oct 5 12:05:01.682: INFO: pvc is nil Oct 5 12:05:01.682: INFO: Deleting PersistentVolume "local-pvn2nj4" STEP: Cleaning up PVC and PV Oct 5 12:05:01.687: INFO: pvc is nil Oct 5 12:05:01.687: INFO: Deleting PersistentVolume "local-pv2mrgj" STEP: Cleaning up PVC and PV Oct 5 12:05:01.691: INFO: pvc is nil Oct 5 12:05:01.691: INFO: Deleting PersistentVolume "local-pvl6gxz" STEP: Cleaning up PVC and PV Oct 5 12:05:01.699: INFO: pvc is nil Oct 5 12:05:01.699: INFO: Deleting PersistentVolume "local-pvl8bdb" STEP: Cleaning up PVC and PV Oct 5 12:05:01.702: INFO: pvc is nil Oct 5 12:05:01.702: INFO: Deleting PersistentVolume "local-pvqdd9b" STEP: Cleaning up PVC and PV Oct 5 12:05:01.706: INFO: pvc is nil Oct 5 12:05:01.706: INFO: Deleting PersistentVolume "local-pv9bq7s" STEP: Cleaning up PVC and PV Oct 5 12:05:01.714: INFO: pvc is nil Oct 5 12:05:01.715: INFO: Deleting PersistentVolume "local-pvdwh7p" STEP: Cleaning up PVC and PV Oct 5 12:05:01.718: INFO: pvc is nil Oct 5 12:05:01.718: INFO: Deleting PersistentVolume "local-pvnpcwl" STEP: Cleaning up 
PVC and PV Oct 5 12:05:01.721: INFO: pvc is nil Oct 5 12:05:01.721: INFO: Deleting PersistentVolume "local-pv4jdkp" STEP: Cleaning up PVC and PV Oct 5 12:05:01.724: INFO: pvc is nil Oct 5 12:05:01.724: INFO: Deleting PersistentVolume "local-pv9rfsr" STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-66c6b6fa-a058-495f-87f7-a6838943c3e2" Oct 5 12:05:01.728: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-66c6b6fa-a058-495f-87f7-a6838943c3e2"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:01.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:01.868: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-66c6b6fa-a058-495f-87f7-a6838943c3e2] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:01.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-e74281b7-f337-46dd-863c-0f6b07059fa8" Oct 5 12:05:02.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e74281b7-f337-46dd-863c-0f6b07059fa8"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:02.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:02.170: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e74281b7-f337-46dd-863c-0f6b07059fa8] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:02.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-d439dd17-fdc6-438c-8271-5f55bfd6dc69" Oct 5 12:05:02.309: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d439dd17-fdc6-438c-8271-5f55bfd6dc69"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:02.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:02.457: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d439dd17-fdc6-438c-8271-5f55bfd6dc69] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:02.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-f9489208-00ef-400f-a6aa-40f18ae5ae3e" Oct 5 12:05:02.610: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-f9489208-00ef-400f-a6aa-40f18ae5ae3e"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:02.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:02.775: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f9489208-00ef-400f-a6aa-40f18ae5ae3e] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:02.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-23e2001c-f84e-44f7-9550-371fc91ca242" Oct 5 12:05:02.924: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-23e2001c-f84e-44f7-9550-371fc91ca242"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:02.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:03.066: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-23e2001c-f84e-44f7-9550-371fc91ca242] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:03.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-a1177ee8-e77e-43a2-a7d0-e1fde136f82a" Oct 5 12:05:03.188: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a1177ee8-e77e-43a2-a7d0-e1fde136f82a"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:03.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:03.319: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a1177ee8-e77e-43a2-a7d0-e1fde136f82a] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:03.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-15b1831a-de8b-4301-b4ea-f87dd5caa7eb" Oct 5 12:05:03.452: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-15b1831a-de8b-4301-b4ea-f87dd5caa7eb"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:03.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:03.542: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-15b1831a-de8b-4301-b4ea-f87dd5caa7eb] 
Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:03.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-c3b56b2c-9d86-4b94-8654-a1b1a4339efe" Oct 5 12:05:03.700: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c3b56b2c-9d86-4b94-8654-a1b1a4339efe"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:03.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:03.830: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c3b56b2c-9d86-4b94-8654-a1b1a4339efe] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:03.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-e108ceed-0780-42c8-9647-aab11a25b01a" Oct 5 12:05:03.974: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e108ceed-0780-42c8-9647-aab11a25b01a"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:03.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:04.137: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e108ceed-0780-42c8-9647-aab11a25b01a] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:04.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-c98725a0-a095-40d1-acf3-696247c7bf23" Oct 5 12:05:04.283: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c98725a0-a095-40d1-acf3-696247c7bf23"] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:04.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:05:04.374: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c98725a0-a095-40d1-acf3-696247c7bf23] Namespace:persistent-local-volumes-test-1714 PodName:hostexec-v122-worker2-cnkm2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:04.374: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:04.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"persistent-local-volumes-test-1714" for this suite. • [SLOW TEST:90.269 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":-1,"completed":2,"skipped":23,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:55.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-a298f44e-cfc2-482d-9552-4e0d4988fccf" Oct 5 12:04:59.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a298f44e-cfc2-482d-9552-4e0d4988fccf && dd if=/dev/zero of=/tmp/local-volume-test-a298f44e-cfc2-482d-9552-4e0d4988fccf/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-a298f44e-cfc2-482d-9552-4e0d4988fccf/file] Namespace:persistent-local-volumes-test-2733 PodName:hostexec-v122-worker2-6lb8q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:04:59.884: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:00.047: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a298f44e-cfc2-482d-9552-4e0d4988fccf/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2733 PodName:hostexec-v122-worker2-6lb8q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:00.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:05:00.170: INFO: Creating a PV followed by a PVC Oct 5 12:05:00.181: INFO: Waiting for PV local-pvsr74x to bind to PVC pvc-9hmw2 Oct 5 12:05:00.181: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9hmw2] to have phase Bound Oct 5 12:05:00.184: INFO: PersistentVolumeClaim pvc-9hmw2 found but phase is Pending instead of Bound. Oct 5 12:05:02.188: INFO: PersistentVolumeClaim pvc-9hmw2 found but phase is Pending instead of Bound. Oct 5 12:05:04.192: INFO: PersistentVolumeClaim pvc-9hmw2 found but phase is Pending instead of Bound. 
Oct 5 12:05:06.197: INFO: PersistentVolumeClaim pvc-9hmw2 found and phase=Bound (6.015380107s) Oct 5 12:05:06.197: INFO: Waiting up to 3m0s for PersistentVolume local-pvsr74x to have phase Bound Oct 5 12:05:06.200: INFO: PersistentVolume local-pvsr74x found and phase=Bound (3.094827ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:05:14.225: INFO: pod "pod-a8cc58b9-78cc-442a-bce8-70c96e63ae0f" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:05:14.225: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2733 PodName:pod-a8cc58b9-78cc-442a-bce8-70c96e63ae0f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:14.225: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:14.351: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000181 seconds, 97.1KB/s", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Oct 5 12:05:14.351: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-2733 PodName:pod-a8cc58b9-78cc-442a-bce8-70c96e63ae0f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:14.351: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:14.485: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-a8cc58b9-78cc-442a-bce8-70c96e63ae0f in namespace persistent-local-volumes-test-2733 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:05:14.490: INFO: Deleting PersistentVolumeClaim "pvc-9hmw2" Oct 5 12:05:14.495: INFO: Deleting PersistentVolume "local-pvsr74x" Oct 5 12:05:14.499: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a298f44e-cfc2-482d-9552-4e0d4988fccf/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2733 PodName:hostexec-v122-worker2-6lb8q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:14.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down 
block device "/dev/loop8" on node "v122-worker2" at path /tmp/local-volume-test-a298f44e-cfc2-482d-9552-4e0d4988fccf/file Oct 5 12:05:14.644: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-2733 PodName:hostexec-v122-worker2-6lb8q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:14.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-a298f44e-cfc2-482d-9552-4e0d4988fccf Oct 5 12:05:14.799: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a298f44e-cfc2-482d-9552-4e0d4988fccf] Namespace:persistent-local-volumes-test-2733 PodName:hostexec-v122-worker2-6lb8q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:14.799: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:14.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2733" for this suite. • [SLOW TEST:19.116 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":138,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:14.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:05:19.045: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-5caf8b74-962c-441e-bce5-3fae55f41712-backend && ln -s /tmp/local-volume-test-5caf8b74-962c-441e-bce5-3fae55f41712-backend /tmp/local-volume-test-5caf8b74-962c-441e-bce5-3fae55f41712] Namespace:persistent-local-volumes-test-3566 PodName:hostexec-v122-worker2-2rnkv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Oct 5 12:05:19.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:05:19.203: INFO: Creating a PV followed by a PVC Oct 5 12:05:19.212: INFO: Waiting for PV local-pvq6mks to bind to PVC pvc-vxwc5 Oct 5 12:05:19.212: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vxwc5] to have phase Bound Oct 5 12:05:19.215: INFO: PersistentVolumeClaim pvc-vxwc5 found but phase is Pending instead of Bound. Oct 5 12:05:21.219: INFO: PersistentVolumeClaim pvc-vxwc5 found and phase=Bound (2.007429035s) Oct 5 12:05:21.219: INFO: Waiting up to 3m0s for PersistentVolume local-pvq6mks to have phase Bound Oct 5 12:05:21.222: INFO: PersistentVolume local-pvq6mks found and phase=Bound (3.078636ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Oct 5 12:05:23.248: INFO: pod "pod-285fe2d7-19f6-4611-97ae-7be8daa62905" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:05:23.248: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3566 PodName:pod-285fe2d7-19f6-4611-97ae-7be8daa62905 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:23.248: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:23.378: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:05:23.378: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3566 PodName:pod-285fe2d7-19f6-4611-97ae-7be8daa62905 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:23.378: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:23.512: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Oct 5 12:05:27.535: INFO: pod "pod-2441eb14-9ea5-4004-aa3e-fe6620090a8e" created on Node "v122-worker2" Oct 5 12:05:27.535: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3566 PodName:pod-2441eb14-9ea5-4004-aa3e-fe6620090a8e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:27.535: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:27.640: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Oct 5 12:05:27.640: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5caf8b74-962c-441e-bce5-3fae55f41712 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3566 PodName:pod-2441eb14-9ea5-4004-aa3e-fe6620090a8e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:27.640: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:27.707: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5caf8b74-962c-441e-bce5-3fae55f41712 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Oct 5 12:05:27.707: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-3566 PodName:pod-285fe2d7-19f6-4611-97ae-7be8daa62905 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:27.707: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:27.835: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-5caf8b74-962c-441e-bce5-3fae55f41712", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-285fe2d7-19f6-4611-97ae-7be8daa62905 in namespace persistent-local-volumes-test-3566 STEP: Deleting pod2 STEP: Deleting pod pod-2441eb14-9ea5-4004-aa3e-fe6620090a8e in namespace persistent-local-volumes-test-3566 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:05:27.845: INFO: Deleting PersistentVolumeClaim "pvc-vxwc5" Oct 5 12:05:27.850: INFO: Deleting PersistentVolume "local-pvq6mks" STEP: Removing the test directory Oct 5 12:05:27.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5caf8b74-962c-441e-bce5-3fae55f41712 && rm -r /tmp/local-volume-test-5caf8b74-962c-441e-bce5-3fae55f41712-backend] Namespace:persistent-local-volumes-test-3566 PodName:hostexec-v122-worker2-2rnkv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:27.854: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:27.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3566" for this suite. 
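The dir-link spec above exercises a local volume whose published path is a symlink to a backing directory on the node. A minimal sketch of the same lifecycle, run directly in a node shell with an illustrative path (the real test uses a randomly named /tmp/local-volume-test-* directory), is:

BASE=/tmp/local-volume-test-example
# create the backing directory and the symlink the local PV points at
mkdir "${BASE}-backend" && ln -s "${BASE}-backend" "${BASE}"
# what pod1 does through its /mnt/volume1 mount
echo test-file-content > "${BASE}/test-file"
# what pod2 (and later pod1) reads back through the shared volume
cat "${BASE}/test-file"
# teardown, mirroring the AfterEach above
rm -r "${BASE}" && rm -r "${BASE}-backend"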
• [SLOW TEST:13.007 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":148,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:01.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339 STEP: Building a driver namespace object, basename csi-mock-volumes-7636 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:05:01.634: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-attacher Oct 5 12:05:01.637: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7636 Oct 5 12:05:01.637: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7636 Oct 5 12:05:01.640: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7636 Oct 5 12:05:01.643: INFO: creating *v1.Role: csi-mock-volumes-7636-548/external-attacher-cfg-csi-mock-volumes-7636 Oct 5 12:05:01.646: INFO: creating *v1.RoleBinding: csi-mock-volumes-7636-548/csi-attacher-role-cfg Oct 5 12:05:01.649: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-provisioner Oct 5 12:05:01.652: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7636 Oct 5 12:05:01.652: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7636 Oct 5 12:05:01.655: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7636 Oct 5 12:05:01.658: INFO: creating *v1.Role: csi-mock-volumes-7636-548/external-provisioner-cfg-csi-mock-volumes-7636 Oct 5 12:05:01.661: INFO: creating *v1.RoleBinding: csi-mock-volumes-7636-548/csi-provisioner-role-cfg Oct 5 12:05:01.663: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-resizer Oct 5 12:05:01.665: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7636 Oct 5 12:05:01.665: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7636 Oct 5 12:05:01.668: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7636 Oct 5 12:05:01.670: INFO: creating *v1.Role: csi-mock-volumes-7636-548/external-resizer-cfg-csi-mock-volumes-7636 Oct 5 12:05:01.673: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-7636-548/csi-resizer-role-cfg Oct 5 12:05:01.676: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-snapshotter Oct 5 12:05:01.680: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7636 Oct 5 12:05:01.680: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7636 Oct 5 12:05:01.682: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7636 Oct 5 12:05:01.685: INFO: creating *v1.Role: csi-mock-volumes-7636-548/external-snapshotter-leaderelection-csi-mock-volumes-7636 Oct 5 12:05:01.689: INFO: creating *v1.RoleBinding: csi-mock-volumes-7636-548/external-snapshotter-leaderelection Oct 5 12:05:01.697: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-mock Oct 5 12:05:01.700: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7636 Oct 5 12:05:01.703: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7636 Oct 5 12:05:01.706: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7636 Oct 5 12:05:01.713: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7636 Oct 5 12:05:01.715: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7636 Oct 5 12:05:01.717: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7636 Oct 5 12:05:01.720: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7636 Oct 5 12:05:01.722: INFO: creating *v1.StatefulSet: csi-mock-volumes-7636-548/csi-mockplugin Oct 5 12:05:01.727: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7636 Oct 5 12:05:01.730: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7636" Oct 5 12:05:01.732: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7636 to register on node v122-worker STEP: Creating pod Oct 5 12:05:11.252: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:05:11.259: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rmqrb] to have phase Bound Oct 5 12:05:11.263: INFO: PersistentVolumeClaim pvc-rmqrb found but phase is Pending instead of Bound. 
Oct 5 12:05:13.266: INFO: PersistentVolumeClaim pvc-rmqrb found and phase=Bound (2.006613933s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-pntb9 Oct 5 12:05:15.295: INFO: Deleting pod "pvc-volume-tester-pntb9" in namespace "csi-mock-volumes-7636" Oct 5 12:05:15.300: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pntb9" to be fully deleted STEP: Deleting claim pvc-rmqrb Oct 5 12:05:19.315: INFO: Waiting up to 2m0s for PersistentVolume pvc-a40d135e-d122-43c9-8e54-4500c0a6bfd3 to get deleted Oct 5 12:05:19.319: INFO: PersistentVolume pvc-a40d135e-d122-43c9-8e54-4500c0a6bfd3 found and phase=Bound (3.199979ms) Oct 5 12:05:21.324: INFO: PersistentVolume pvc-a40d135e-d122-43c9-8e54-4500c0a6bfd3 was removed STEP: Deleting storageclass csi-mock-volumes-7636-scqlsbz STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7636 STEP: Waiting for namespaces [csi-mock-volumes-7636] to vanish STEP: uninstalling csi mock driver Oct 5 12:05:27.340: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-attacher Oct 5 12:05:27.344: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7636 Oct 5 12:05:27.348: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7636 Oct 5 12:05:27.353: INFO: deleting *v1.Role: csi-mock-volumes-7636-548/external-attacher-cfg-csi-mock-volumes-7636 Oct 5 12:05:27.357: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7636-548/csi-attacher-role-cfg Oct 5 12:05:27.361: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-provisioner Oct 5 12:05:27.366: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7636 Oct 5 12:05:27.370: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7636 Oct 5 12:05:27.374: INFO: deleting *v1.Role: csi-mock-volumes-7636-548/external-provisioner-cfg-csi-mock-volumes-7636 Oct 5 12:05:27.378: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7636-548/csi-provisioner-role-cfg Oct 5 12:05:27.382: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-resizer Oct 5 12:05:27.386: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7636 Oct 5 12:05:27.390: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7636 Oct 5 12:05:27.394: INFO: deleting *v1.Role: csi-mock-volumes-7636-548/external-resizer-cfg-csi-mock-volumes-7636 Oct 5 12:05:27.397: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7636-548/csi-resizer-role-cfg Oct 5 12:05:27.401: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-snapshotter Oct 5 12:05:27.412: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7636 Oct 5 12:05:27.416: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7636 Oct 5 12:05:27.419: INFO: deleting *v1.Role: csi-mock-volumes-7636-548/external-snapshotter-leaderelection-csi-mock-volumes-7636 Oct 5 12:05:27.423: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7636-548/external-snapshotter-leaderelection Oct 5 12:05:27.427: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7636-548/csi-mock Oct 5 12:05:27.431: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7636 Oct 5 12:05:27.435: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7636 Oct 5 12:05:27.439: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7636 Oct 5 12:05:27.442: INFO: deleting 
*v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7636 Oct 5 12:05:27.447: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7636 Oct 5 12:05:27.452: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7636 Oct 5 12:05:27.456: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7636 Oct 5 12:05:27.460: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7636-548/csi-mockplugin Oct 5 12:05:27.465: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7636 STEP: deleting the driver namespace: csi-mock-volumes-7636-548 STEP: Waiting for namespaces [csi-mock-volumes-7636-548] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:33.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:31.944 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317 should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":3,"skipped":86,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:33.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Oct 5 12:05:33.545: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:33.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-7852" for this suite. 
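For the CSI attach spec that passed a few entries above ("should not require VolumeAttach for drivers without attachment"), the same observation can be made by hand; this is only an illustrative check and the driver name is a placeholder:

# a CSIDriver that opts out of the attach flow advertises attachRequired: false
kubectl get csidriver <driver-name> -o jsonpath='{.spec.attachRequired}'
# while a pod using that driver is running, no VolumeAttachment should exist for its PV
kubectl get volumeattachments

With attachRequired: false the attach/detach controller never creates a VolumeAttachment object for the driver's volumes, which is what the mock-driver spec asserts; the mirror-image spec later in this log ("should require VolumeAttach for drivers with attachment") checks that one does appear when attachment is required.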
S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:40.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688 STEP: Building a driver namespace object, basename csi-mock-volumes-6913 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:03:40.258: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-attacher Oct 5 12:03:40.263: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6913 Oct 5 12:03:40.263: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6913 Oct 5 12:03:40.267: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6913 Oct 5 12:03:40.271: INFO: creating *v1.Role: csi-mock-volumes-6913-4286/external-attacher-cfg-csi-mock-volumes-6913 Oct 5 12:03:40.275: INFO: creating *v1.RoleBinding: csi-mock-volumes-6913-4286/csi-attacher-role-cfg Oct 5 12:03:40.290: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-provisioner Oct 5 12:03:40.298: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6913 Oct 5 12:03:40.298: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6913 Oct 5 12:03:40.304: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6913 Oct 5 12:03:40.307: INFO: creating *v1.Role: csi-mock-volumes-6913-4286/external-provisioner-cfg-csi-mock-volumes-6913 Oct 5 12:03:40.311: INFO: creating *v1.RoleBinding: csi-mock-volumes-6913-4286/csi-provisioner-role-cfg Oct 5 12:03:40.315: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-resizer Oct 5 12:03:40.318: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6913 Oct 5 12:03:40.318: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6913 Oct 5 12:03:40.322: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6913 Oct 5 12:03:40.326: INFO: creating *v1.Role: csi-mock-volumes-6913-4286/external-resizer-cfg-csi-mock-volumes-6913 Oct 5 12:03:40.329: INFO: creating *v1.RoleBinding: csi-mock-volumes-6913-4286/csi-resizer-role-cfg Oct 5 12:03:40.332: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-snapshotter Oct 5 12:03:40.335: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-6913 Oct 5 12:03:40.335: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6913 Oct 5 12:03:40.337: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6913 Oct 5 12:03:40.340: INFO: creating *v1.Role: csi-mock-volumes-6913-4286/external-snapshotter-leaderelection-csi-mock-volumes-6913 Oct 5 12:03:40.343: INFO: creating *v1.RoleBinding: csi-mock-volumes-6913-4286/external-snapshotter-leaderelection Oct 5 12:03:40.345: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-mock Oct 5 12:03:40.348: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6913 Oct 5 12:03:40.352: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6913 Oct 5 12:03:40.354: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6913 Oct 5 12:03:40.357: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6913 Oct 5 12:03:40.360: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6913 Oct 5 12:03:40.363: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6913 Oct 5 12:03:40.365: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6913 Oct 5 12:03:40.369: INFO: creating *v1.StatefulSet: csi-mock-volumes-6913-4286/csi-mockplugin Oct 5 12:03:40.374: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6913 Oct 5 12:03:40.378: INFO: creating *v1.StatefulSet: csi-mock-volumes-6913-4286/csi-mockplugin-resizer Oct 5 12:03:40.382: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6913" Oct 5 12:03:40.384: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6913 to register on node v122-worker STEP: Creating pod Oct 5 12:03:49.905: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:03:49.912: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-zjv9d] to have phase Bound Oct 5 12:03:49.916: INFO: PersistentVolumeClaim pvc-zjv9d found but phase is Pending instead of Bound. 
Oct 5 12:03:51.920: INFO: PersistentVolumeClaim pvc-zjv9d found and phase=Bound (2.00748991s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-2snhj Oct 5 12:05:17.965: INFO: Deleting pod "pvc-volume-tester-2snhj" in namespace "csi-mock-volumes-6913" Oct 5 12:05:17.970: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2snhj" to be fully deleted STEP: Deleting claim pvc-zjv9d Oct 5 12:05:19.985: INFO: Waiting up to 2m0s for PersistentVolume pvc-22281c4f-5e7f-4211-9197-aa173e343717 to get deleted Oct 5 12:05:19.988: INFO: PersistentVolume pvc-22281c4f-5e7f-4211-9197-aa173e343717 found and phase=Bound (2.748708ms) Oct 5 12:05:21.992: INFO: PersistentVolume pvc-22281c4f-5e7f-4211-9197-aa173e343717 was removed STEP: Deleting storageclass csi-mock-volumes-6913-scnl2s7 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6913 STEP: Waiting for namespaces [csi-mock-volumes-6913] to vanish STEP: uninstalling csi mock driver Oct 5 12:05:28.008: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-attacher Oct 5 12:05:28.012: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6913 Oct 5 12:05:28.018: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6913 Oct 5 12:05:28.021: INFO: deleting *v1.Role: csi-mock-volumes-6913-4286/external-attacher-cfg-csi-mock-volumes-6913 Oct 5 12:05:28.026: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6913-4286/csi-attacher-role-cfg Oct 5 12:05:28.031: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-provisioner Oct 5 12:05:28.035: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6913 Oct 5 12:05:28.039: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6913 Oct 5 12:05:28.042: INFO: deleting *v1.Role: csi-mock-volumes-6913-4286/external-provisioner-cfg-csi-mock-volumes-6913 Oct 5 12:05:28.047: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6913-4286/csi-provisioner-role-cfg Oct 5 12:05:28.050: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-resizer Oct 5 12:05:28.054: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6913 Oct 5 12:05:28.058: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6913 Oct 5 12:05:28.062: INFO: deleting *v1.Role: csi-mock-volumes-6913-4286/external-resizer-cfg-csi-mock-volumes-6913 Oct 5 12:05:28.066: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6913-4286/csi-resizer-role-cfg Oct 5 12:05:28.069: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-snapshotter Oct 5 12:05:28.073: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6913 Oct 5 12:05:28.077: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6913 Oct 5 12:05:28.081: INFO: deleting *v1.Role: csi-mock-volumes-6913-4286/external-snapshotter-leaderelection-csi-mock-volumes-6913 Oct 5 12:05:28.085: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6913-4286/external-snapshotter-leaderelection Oct 5 12:05:28.090: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6913-4286/csi-mock Oct 5 12:05:28.094: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6913 Oct 5 12:05:28.099: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6913 Oct 5 12:05:28.104: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6913 Oct 5 12:05:28.109: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6913 Oct 5 12:05:28.113: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6913 Oct 5 12:05:28.117: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6913 Oct 5 12:05:28.122: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6913 Oct 5 12:05:28.128: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6913-4286/csi-mockplugin Oct 5 12:05:28.134: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6913 Oct 5 12:05:28.139: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6913-4286/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-6913-4286 STEP: Waiting for namespaces [csi-mock-volumes-6913-4286] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:34.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:114.007 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:673 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":4,"skipped":93,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:33.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Oct 5 12:05:33.677: INFO: The status of Pod test-hostpath-type-vcxww is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:05:35.681: INFO: The status of Pod test-hostpath-type-vcxww is Running (Ready = true) STEP: running on node v122-worker STEP: Create a character device for further testing Oct 5 12:05:35.684: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-3320 PodName:test-hostpath-type-vcxww ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:35.684: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277 [AfterEach] [sig-storage] HostPathType Character Device [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:37.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-3320" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev","total":-1,"completed":4,"skipped":141,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:55.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 STEP: Building a driver namespace object, basename csi-mock-volumes-4725 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:04:55.659: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-attacher Oct 5 12:04:55.662: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4725 Oct 5 12:04:55.662: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4725 Oct 5 12:04:55.666: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4725 Oct 5 12:04:55.676: INFO: creating *v1.Role: csi-mock-volumes-4725-8033/external-attacher-cfg-csi-mock-volumes-4725 Oct 5 12:04:55.680: INFO: creating *v1.RoleBinding: csi-mock-volumes-4725-8033/csi-attacher-role-cfg Oct 5 12:04:55.684: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-provisioner Oct 5 12:04:55.688: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4725 Oct 5 12:04:55.688: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4725 Oct 5 12:04:55.692: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4725 Oct 5 12:04:55.696: INFO: creating *v1.Role: csi-mock-volumes-4725-8033/external-provisioner-cfg-csi-mock-volumes-4725 Oct 5 12:04:55.700: INFO: creating *v1.RoleBinding: csi-mock-volumes-4725-8033/csi-provisioner-role-cfg Oct 5 12:04:55.704: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-resizer Oct 5 12:04:55.708: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4725 Oct 5 12:04:55.708: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4725 Oct 5 12:04:55.712: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4725 Oct 5 12:04:55.715: INFO: creating *v1.Role: csi-mock-volumes-4725-8033/external-resizer-cfg-csi-mock-volumes-4725 Oct 5 12:04:55.720: INFO: creating *v1.RoleBinding: csi-mock-volumes-4725-8033/csi-resizer-role-cfg Oct 5 12:04:55.724: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-snapshotter Oct 5 12:04:55.727: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4725 Oct 5 12:04:55.727: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4725 Oct 5 12:04:55.731: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4725 Oct 5 12:04:55.734: INFO: 
creating *v1.Role: csi-mock-volumes-4725-8033/external-snapshotter-leaderelection-csi-mock-volumes-4725 Oct 5 12:04:55.738: INFO: creating *v1.RoleBinding: csi-mock-volumes-4725-8033/external-snapshotter-leaderelection Oct 5 12:04:55.742: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-mock Oct 5 12:04:55.745: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4725 Oct 5 12:04:55.749: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4725 Oct 5 12:04:55.752: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4725 Oct 5 12:04:55.756: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4725 Oct 5 12:04:55.759: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4725 Oct 5 12:04:55.763: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4725 Oct 5 12:04:55.766: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4725 Oct 5 12:04:55.770: INFO: creating *v1.StatefulSet: csi-mock-volumes-4725-8033/csi-mockplugin Oct 5 12:04:55.776: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4725 Oct 5 12:04:55.779: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4725" Oct 5 12:04:55.782: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4725 to register on node v122-worker2 STEP: Creating pod Oct 5 12:05:05.301: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:05:05.307: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-r46nq] to have phase Bound Oct 5 12:05:05.310: INFO: PersistentVolumeClaim pvc-r46nq found but phase is Pending instead of Bound. 
Oct 5 12:05:07.317: INFO: PersistentVolumeClaim pvc-r46nq found and phase=Bound (2.01024215s) Oct 5 12:05:07.328: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-r46nq] to have phase Bound Oct 5 12:05:07.332: INFO: PersistentVolumeClaim pvc-r46nq found and phase=Bound (2.928275ms) STEP: Waiting for expected CSI calls STEP: Waiting for pod to be running STEP: Deleting the previously created pod Oct 5 12:05:15.392: INFO: Deleting pod "pvc-volume-tester-xbnxn" in namespace "csi-mock-volumes-4725" Oct 5 12:05:15.398: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xbnxn" to be fully deleted STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-xbnxn Oct 5 12:05:18.414: INFO: Deleting pod "pvc-volume-tester-xbnxn" in namespace "csi-mock-volumes-4725" STEP: Deleting claim pvc-r46nq Oct 5 12:05:18.426: INFO: Waiting up to 2m0s for PersistentVolume pvc-f09c756c-7aed-4793-8982-e66fb9e5e8a9 to get deleted Oct 5 12:05:18.429: INFO: PersistentVolume pvc-f09c756c-7aed-4793-8982-e66fb9e5e8a9 found and phase=Bound (2.998248ms) Oct 5 12:05:20.433: INFO: PersistentVolume pvc-f09c756c-7aed-4793-8982-e66fb9e5e8a9 was removed STEP: Deleting storageclass csi-mock-volumes-4725-sc5px2v STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4725 STEP: Waiting for namespaces [csi-mock-volumes-4725] to vanish STEP: uninstalling csi mock driver Oct 5 12:05:26.448: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-attacher Oct 5 12:05:26.453: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4725 Oct 5 12:05:26.457: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4725 Oct 5 12:05:26.462: INFO: deleting *v1.Role: csi-mock-volumes-4725-8033/external-attacher-cfg-csi-mock-volumes-4725 Oct 5 12:05:26.466: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4725-8033/csi-attacher-role-cfg Oct 5 12:05:26.470: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-provisioner Oct 5 12:05:26.474: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4725 Oct 5 12:05:26.479: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4725 Oct 5 12:05:26.483: INFO: deleting *v1.Role: csi-mock-volumes-4725-8033/external-provisioner-cfg-csi-mock-volumes-4725 Oct 5 12:05:26.487: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4725-8033/csi-provisioner-role-cfg Oct 5 12:05:26.491: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-resizer Oct 5 12:05:26.495: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4725 Oct 5 12:05:26.500: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4725 Oct 5 12:05:26.504: INFO: deleting *v1.Role: csi-mock-volumes-4725-8033/external-resizer-cfg-csi-mock-volumes-4725 Oct 5 12:05:26.509: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4725-8033/csi-resizer-role-cfg Oct 5 12:05:26.513: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-snapshotter Oct 5 12:05:26.517: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4725 Oct 5 12:05:26.521: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4725 Oct 5 12:05:26.524: INFO: deleting *v1.Role: csi-mock-volumes-4725-8033/external-snapshotter-leaderelection-csi-mock-volumes-4725 Oct 5 12:05:26.529: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4725-8033/external-snapshotter-leaderelection Oct 5 12:05:26.533: INFO: deleting 
*v1.ServiceAccount: csi-mock-volumes-4725-8033/csi-mock Oct 5 12:05:26.537: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4725 Oct 5 12:05:26.543: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4725 Oct 5 12:05:26.549: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4725 Oct 5 12:05:26.553: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4725 Oct 5 12:05:26.557: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4725 Oct 5 12:05:26.561: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4725 Oct 5 12:05:26.565: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4725 Oct 5 12:05:26.569: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4725-8033/csi-mockplugin Oct 5 12:05:26.573: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4725 STEP: deleting the driver namespace: csi-mock-volumes-4725-8033 STEP: Waiting for namespaces [csi-mock-volumes-4725-8033] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:38.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:43.021 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:735 should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success","total":-1,"completed":3,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:04.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339 STEP: Building a driver namespace object, basename csi-mock-volumes-6930 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:05:04.609: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-attacher Oct 5 12:05:04.613: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6930 Oct 5 12:05:04.613: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6930 Oct 5 12:05:04.616: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6930 Oct 5 12:05:04.620: INFO: creating *v1.Role: csi-mock-volumes-6930-3803/external-attacher-cfg-csi-mock-volumes-6930 Oct 5 12:05:04.624: INFO: creating *v1.RoleBinding: csi-mock-volumes-6930-3803/csi-attacher-role-cfg Oct 5 
12:05:04.628: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-provisioner Oct 5 12:05:04.631: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6930 Oct 5 12:05:04.631: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6930 Oct 5 12:05:04.635: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6930 Oct 5 12:05:04.640: INFO: creating *v1.Role: csi-mock-volumes-6930-3803/external-provisioner-cfg-csi-mock-volumes-6930 Oct 5 12:05:04.644: INFO: creating *v1.RoleBinding: csi-mock-volumes-6930-3803/csi-provisioner-role-cfg Oct 5 12:05:04.648: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-resizer Oct 5 12:05:04.651: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6930 Oct 5 12:05:04.651: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6930 Oct 5 12:05:04.655: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6930 Oct 5 12:05:04.659: INFO: creating *v1.Role: csi-mock-volumes-6930-3803/external-resizer-cfg-csi-mock-volumes-6930 Oct 5 12:05:04.662: INFO: creating *v1.RoleBinding: csi-mock-volumes-6930-3803/csi-resizer-role-cfg Oct 5 12:05:04.666: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-snapshotter Oct 5 12:05:04.670: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6930 Oct 5 12:05:04.670: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6930 Oct 5 12:05:04.674: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6930 Oct 5 12:05:04.678: INFO: creating *v1.Role: csi-mock-volumes-6930-3803/external-snapshotter-leaderelection-csi-mock-volumes-6930 Oct 5 12:05:04.681: INFO: creating *v1.RoleBinding: csi-mock-volumes-6930-3803/external-snapshotter-leaderelection Oct 5 12:05:04.685: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-mock Oct 5 12:05:04.689: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6930 Oct 5 12:05:04.693: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6930 Oct 5 12:05:04.697: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6930 Oct 5 12:05:04.700: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6930 Oct 5 12:05:04.704: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6930 Oct 5 12:05:04.708: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6930 Oct 5 12:05:04.712: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6930 Oct 5 12:05:04.716: INFO: creating *v1.StatefulSet: csi-mock-volumes-6930-3803/csi-mockplugin Oct 5 12:05:04.723: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6930 Oct 5 12:05:04.726: INFO: creating *v1.StatefulSet: csi-mock-volumes-6930-3803/csi-mockplugin-attacher Oct 5 12:05:04.731: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6930" Oct 5 12:05:04.735: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6930 to register on node v122-worker2 STEP: Creating pod Oct 5 12:05:14.253: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:05:14.260: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-b76xx] to have phase Bound Oct 5 12:05:14.264: INFO: PersistentVolumeClaim pvc-b76xx found but phase is Pending instead of Bound. 
Oct 5 12:05:16.269: INFO: PersistentVolumeClaim pvc-b76xx found and phase=Bound (2.009216959s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-cgmvm Oct 5 12:05:24.299: INFO: Deleting pod "pvc-volume-tester-cgmvm" in namespace "csi-mock-volumes-6930" Oct 5 12:05:24.303: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cgmvm" to be fully deleted STEP: Deleting claim pvc-b76xx Oct 5 12:05:26.319: INFO: Waiting up to 2m0s for PersistentVolume pvc-54e60ab5-c2f7-439c-b32c-6d297109b039 to get deleted Oct 5 12:05:26.322: INFO: PersistentVolume pvc-54e60ab5-c2f7-439c-b32c-6d297109b039 found and phase=Bound (3.368194ms) Oct 5 12:05:28.326: INFO: PersistentVolume pvc-54e60ab5-c2f7-439c-b32c-6d297109b039 found and phase=Released (2.007052626s) Oct 5 12:05:30.330: INFO: PersistentVolume pvc-54e60ab5-c2f7-439c-b32c-6d297109b039 was removed STEP: Deleting storageclass csi-mock-volumes-6930-scb89vw STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6930 STEP: Waiting for namespaces [csi-mock-volumes-6930] to vanish STEP: uninstalling csi mock driver Oct 5 12:05:36.345: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-attacher Oct 5 12:05:36.349: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6930 Oct 5 12:05:36.354: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6930 Oct 5 12:05:36.359: INFO: deleting *v1.Role: csi-mock-volumes-6930-3803/external-attacher-cfg-csi-mock-volumes-6930 Oct 5 12:05:36.363: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6930-3803/csi-attacher-role-cfg Oct 5 12:05:36.367: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-provisioner Oct 5 12:05:36.371: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6930 Oct 5 12:05:36.375: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6930 Oct 5 12:05:36.379: INFO: deleting *v1.Role: csi-mock-volumes-6930-3803/external-provisioner-cfg-csi-mock-volumes-6930 Oct 5 12:05:36.383: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6930-3803/csi-provisioner-role-cfg Oct 5 12:05:36.386: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-resizer Oct 5 12:05:36.390: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6930 Oct 5 12:05:36.395: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6930 Oct 5 12:05:36.399: INFO: deleting *v1.Role: csi-mock-volumes-6930-3803/external-resizer-cfg-csi-mock-volumes-6930 Oct 5 12:05:36.404: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6930-3803/csi-resizer-role-cfg Oct 5 12:05:36.408: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-snapshotter Oct 5 12:05:36.416: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6930 Oct 5 12:05:36.420: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6930 Oct 5 12:05:36.424: INFO: deleting *v1.Role: csi-mock-volumes-6930-3803/external-snapshotter-leaderelection-csi-mock-volumes-6930 Oct 5 12:05:36.429: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6930-3803/external-snapshotter-leaderelection Oct 5 12:05:36.433: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6930-3803/csi-mock Oct 5 12:05:36.437: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6930 Oct 5 12:05:36.441: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6930 Oct 5 12:05:36.445: INFO: 
deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6930 Oct 5 12:05:36.450: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6930 Oct 5 12:05:36.454: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6930 Oct 5 12:05:36.458: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6930 Oct 5 12:05:36.462: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6930 Oct 5 12:05:36.466: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6930-3803/csi-mockplugin Oct 5 12:05:36.470: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6930 Oct 5 12:05:36.474: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6930-3803/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-6930-3803 STEP: Waiting for namespaces [csi-mock-volumes-6930-3803] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:42.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:37.995 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317 should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":3,"skipped":39,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:38.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 STEP: Creating a pod to test downward API volume plugin Oct 5 12:05:38.788: INFO: Waiting up to 5m0s for pod "metadata-volume-fb64a8bc-d88a-4ffe-acba-5bb950a5bd47" in namespace "downward-api-10" to be "Succeeded or Failed" Oct 5 12:05:38.791: INFO: Pod "metadata-volume-fb64a8bc-d88a-4ffe-acba-5bb950a5bd47": Phase="Pending", Reason="", readiness=false. Elapsed: 3.326948ms Oct 5 12:05:40.796: INFO: Pod "metadata-volume-fb64a8bc-d88a-4ffe-acba-5bb950a5bd47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007809956s Oct 5 12:05:42.801: INFO: Pod "metadata-volume-fb64a8bc-d88a-4ffe-acba-5bb950a5bd47": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012783849s STEP: Saw pod success Oct 5 12:05:42.801: INFO: Pod "metadata-volume-fb64a8bc-d88a-4ffe-acba-5bb950a5bd47" satisfied condition "Succeeded or Failed" Oct 5 12:05:42.804: INFO: Trying to get logs from node v122-worker pod metadata-volume-fb64a8bc-d88a-4ffe-acba-5bb950a5bd47 container client-container: STEP: delete the pod Oct 5 12:05:42.819: INFO: Waiting for pod metadata-volume-fb64a8bc-d88a-4ffe-acba-5bb950a5bd47 to disappear Oct 5 12:05:42.822: INFO: Pod metadata-volume-fb64a8bc-d88a-4ffe-acba-5bb950a5bd47 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:42.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-10" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:42.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:05:42.987: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:42.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3833" for this suite. 
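The Downward API volume spec that passed just above boils down to a pod like the following; the names, image and modes here are illustrative stand-ins for the generated test values, not the exact pod the framework builds:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: metadata-volume-example      # the test generates a random name
spec:
  securityContext:
    runAsUser: 1000                  # non-root, per the spec name
    fsGroup: 2000
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative; the test uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0440
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

The spec then waits for the pod to reach Succeeded and inspects the container log for the pod name, which is what the "Saw pod success" and "Trying to get logs" entries above correspond to.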
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:457 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:555 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:37.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:630 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:653 STEP: Create a PVC STEP: Create 2 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:644 STEP: Clean PV local-pvnm9tz [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:45.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2293" for this suite. 
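The "Pods sharing a single local PV" spec above comes down to two pods that reference the same claim; a minimal sketch of one of them (claim, pod and image names are illustrative), with the second pod differing only in metadata.name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-local-pv-pod-1
spec:
  containers:
  - name: app
    image: busybox                  # illustrative
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /mnt/volume1
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-local-pvc   # both pods name the same claim
EOF

Because the PV is local to a single node, its node affinity pins both pods to that node, so they can be Running against the same path at the same time, which is what the spec waits for.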
• [SLOW TEST:8.085 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:625 all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:653 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":-1,"completed":5,"skipped":154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:42.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Oct 5 12:05:42.602: INFO: The status of Pod test-hostpath-type-t6zk7 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:05:44.606: INFO: The status of Pod test-hostpath-type-t6zk7 is Running (Ready = true) STEP: running on node v122-worker STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:48.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-9531" for this suite. 
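The HostPathType Directory spec above first lets the kubelet create the directory 'adir' with HostPathDirectoryOrCreate and then mounts it again under the stricter HostPathDirectory check. A hedged sketch of the two volume definitions involved (path and volume name are placeholders):

    package main

    import corev1 "k8s.io/api/core/v1"

    // hostPathVolume builds a hostPath volume with an explicit type check.
    // HostPathDirectoryOrCreate makes the kubelet create the directory if it
    // is missing; HostPathDirectory fails the mount unless it already exists.
    func hostPathVolume(name, path string, t corev1.HostPathType) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{
                    Path: path,
                    Type: &t,
                },
            },
        }
    }

    func main() {
        // First pass: create the directory on the node if needed.
        _ = hostPathVolume("adir", "/tmp/adir", corev1.HostPathDirectoryOrCreate)
        // Second pass: require that the directory already exists.
        _ = hostPathVolume("adir", "/tmp/adir", corev1.HostPathDirectory)
    }
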
• [SLOW TEST:6.106 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory","total":-1,"completed":4,"skipped":68,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:46.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4" Oct 5 12:05:48.077: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4 && dd if=/dev/zero of=/tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4/file] Namespace:persistent-local-volumes-test-1218 PodName:hostexec-v122-worker-tq4dp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:48.077: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:48.290: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1218 PodName:hostexec-v122-worker-tq4dp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:48.290: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:48.430: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop8 && mount -t ext4 /dev/loop8 /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4 && chmod o+rwx /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4] Namespace:persistent-local-volumes-test-1218 PodName:hostexec-v122-worker-tq4dp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:48.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:05:48.919: INFO: Creating a PV followed by a PVC Oct 5 12:05:48.928: INFO: Waiting for PV local-pvrgwqv to bind to PVC pvc-glzcr Oct 5 12:05:48.928: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-glzcr] to have phase Bound Oct 5 12:05:48.931: 
INFO: PersistentVolumeClaim pvc-glzcr found but phase is Pending instead of Bound. Oct 5 12:05:50.936: INFO: PersistentVolumeClaim pvc-glzcr found and phase=Bound (2.007809237s) Oct 5 12:05:50.936: INFO: Waiting up to 3m0s for PersistentVolume local-pvrgwqv to have phase Bound Oct 5 12:05:50.939: INFO: PersistentVolume local-pvrgwqv found and phase=Bound (3.073281ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Oct 5 12:05:52.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1218 exec pod-f8500971-5362-4490-bf30-b1d2f0f9a156 --namespace=persistent-local-volumes-test-1218 -- stat -c %g /mnt/volume1' Oct 5 12:05:53.216: INFO: stderr: "" Oct 5 12:05:53.216: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-f8500971-5362-4490-bf30-b1d2f0f9a156 in namespace persistent-local-volumes-test-1218 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:05:53.222: INFO: Deleting PersistentVolumeClaim "pvc-glzcr" Oct 5 12:05:53.227: INFO: Deleting PersistentVolume "local-pvrgwqv" Oct 5 12:05:53.231: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4] Namespace:persistent-local-volumes-test-1218 PodName:hostexec-v122-worker-tq4dp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:53.231: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:53.331: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1218 PodName:hostexec-v122-worker-tq4dp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:53.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop8" on node "v122-worker" at path /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4/file Oct 5 12:05:53.478: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-1218 PodName:hostexec-v122-worker-tq4dp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:53.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4 Oct 5 12:05:53.623: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-76b7fa90-ac29-4a62-9339-32f22ab102d4] Namespace:persistent-local-volumes-test-1218 PodName:hostexec-v122-worker-tq4dp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:53.623: INFO: >>> kubeConfig: 
/root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:53.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1218" for this suite. • [SLOW TEST:7.773 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":6,"skipped":181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:48.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Oct 5 12:05:48.724: INFO: The status of Pod test-hostpath-type-62hx5 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:05:50.728: INFO: The status of Pod test-hostpath-type-62hx5 is Running (Ready = true) STEP: running on node v122-worker STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:56.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-4343" for this suite. 
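Several of the specs above poll a PersistentVolumeClaim until it reports phase Bound before creating pods against it ("found but phase is Pending instead of Bound"). A minimal client-go sketch of that wait loop, assuming the kubeconfig path the suite uses; namespace and claim name are placeholders:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPVCBound polls the claim every two seconds until it is Bound or
    // the timeout expires, mirroring the phase-polling seen in the log above.
    func waitForPVCBound(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("PVC %s phase: %s\n", name, pvc.Status.Phase)
            return pvc.Status.Phase == corev1.ClaimBound, nil
        })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        if err := waitForPVCBound(client, "default", "example-claim", 3*time.Minute); err != nil {
            panic(err)
        }
    }
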
• [SLOW TEST:8.106 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset","total":-1,"completed":5,"skipped":76,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:53.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Oct 5 12:05:53.868: INFO: The status of Pod test-hostpath-type-jhn64 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:05:55.872: INFO: The status of Pod test-hostpath-type-jhn64 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:05:57.872: INFO: The status of Pod test-hostpath-type-jhn64 is Running (Ready = true) STEP: running on node v122-worker STEP: Create a character device for further testing Oct 5 12:05:57.876: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-6536 PodName:test-hostpath-type-jhn64 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:57.876: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:05:59.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-6536" for this suite. 
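The HostPathType Character Device spec above creates a device node with mknod, then mounts it with HostPathType set to HostPathFile and verifies that an error event is recorded because the type check does not match. A small client-go sketch of reading those pod events back, assuming the same kubeconfig path; namespace and pod name are placeholders, not the generated ones above:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // printPodEvents lists the events recorded for one pod; the spec above
    // looks for a mount-failure event like this after requesting HostPathFile
    // against a character device.
    func printPodEvents(client kubernetes.Interface, ns, podName string) error {
        events, err := client.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=" + podName,
        })
        if err != nil {
            return err
        }
        for _, e := range events.Items {
            fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
        }
        return nil
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        if err := printPodEvents(client, "host-path-type-char-dev-demo", "test-hostpath-type-pod"); err != nil {
            panic(err)
        }
    }
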
• [SLOW TEST:6.172 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile","total":-1,"completed":7,"skipped":208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:34.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:05:40.230: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-bb5f14ec-427d-4977-a21b-dedd6d4d2e04-backend && mount --bind /tmp/local-volume-test-bb5f14ec-427d-4977-a21b-dedd6d4d2e04-backend /tmp/local-volume-test-bb5f14ec-427d-4977-a21b-dedd6d4d2e04-backend && ln -s /tmp/local-volume-test-bb5f14ec-427d-4977-a21b-dedd6d4d2e04-backend /tmp/local-volume-test-bb5f14ec-427d-4977-a21b-dedd6d4d2e04] Namespace:persistent-local-volumes-test-1249 PodName:hostexec-v122-worker2-zqzxx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:05:40.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:05:40.366: INFO: Creating a PV followed by a PVC Oct 5 12:05:40.375: INFO: Waiting for PV local-pvlccpd to bind to PVC pvc-8hv68 Oct 5 12:05:40.375: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8hv68] to have phase Bound Oct 5 12:05:40.378: INFO: PersistentVolumeClaim pvc-8hv68 found but phase is Pending instead of Bound. Oct 5 12:05:42.382: INFO: PersistentVolumeClaim pvc-8hv68 found but phase is Pending instead of Bound. Oct 5 12:05:44.386: INFO: PersistentVolumeClaim pvc-8hv68 found but phase is Pending instead of Bound. Oct 5 12:05:46.391: INFO: PersistentVolumeClaim pvc-8hv68 found but phase is Pending instead of Bound. Oct 5 12:05:48.395: INFO: PersistentVolumeClaim pvc-8hv68 found but phase is Pending instead of Bound. 
Oct 5 12:05:50.400: INFO: PersistentVolumeClaim pvc-8hv68 found and phase=Bound (10.025120907s) Oct 5 12:05:50.400: INFO: Waiting up to 3m0s for PersistentVolume local-pvlccpd to have phase Bound Oct 5 12:05:50.403: INFO: PersistentVolume local-pvlccpd found and phase=Bound (3.148613ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:05:56.428: INFO: pod "pod-b7b4cd0f-e103-4c2c-b5ab-61fe0e730370" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:05:56.428: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1249 PodName:pod-b7b4cd0f-e103-4c2c-b5ab-61fe0e730370 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:56.428: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:56.556: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:05:56.556: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1249 PodName:pod-b7b4cd0f-e103-4c2c-b5ab-61fe0e730370 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:05:56.556: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:56.682: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-b7b4cd0f-e103-4c2c-b5ab-61fe0e730370 in namespace persistent-local-volumes-test-1249 STEP: Creating pod2 STEP: Creating a pod Oct 5 12:06:00.709: INFO: pod "pod-717cdbfb-f0cc-4427-8ea8-680da41a5a99" created on Node "v122-worker2" STEP: Reading in pod2 Oct 5 12:06:00.709: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1249 PodName:pod-717cdbfb-f0cc-4427-8ea8-680da41a5a99 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:00.709: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:00.843: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-717cdbfb-f0cc-4427-8ea8-680da41a5a99 in namespace persistent-local-volumes-test-1249 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:06:00.848: INFO: Deleting PersistentVolumeClaim "pvc-8hv68" Oct 5 12:06:00.852: INFO: Deleting PersistentVolume "local-pvlccpd" STEP: Removing the test directory Oct 5 12:06:00.856: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-bb5f14ec-427d-4977-a21b-dedd6d4d2e04 && umount /tmp/local-volume-test-bb5f14ec-427d-4977-a21b-dedd6d4d2e04-backend && rm -r /tmp/local-volume-test-bb5f14ec-427d-4977-a21b-dedd6d4d2e04-backend] Namespace:persistent-local-volumes-test-1249 PodName:hostexec-v122-worker2-zqzxx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:00.856: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:01.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1249" for this suite. • [SLOW TEST:26.833 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":102,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:01.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:06:01.052: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:01.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3374" for this suite. 
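The dir-link-bindmounted spec above, like the other PersistentVolumes-local variants, creates a local PV backed by a path on one worker and then relies on node affinity so that both pods land on that node. A sketch of the general shape of such a PV, using placeholder path, capacity, and node name rather than the generated values from this run:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // localPVOnNode sketches a local PersistentVolume: a path on one node plus
    // the required nodeAffinity that pins any consuming pod to that node.
    func localPVOnNode(name, path, node string) *corev1.PersistentVolume {
        return &corev1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PersistentVolumeSpec{
                Capacity: corev1.ResourceList{
                    corev1.ResourceStorage: resource.MustParse("2Gi"),
                },
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                PersistentVolumeSource: corev1.PersistentVolumeSource{
                    Local: &corev1.LocalVolumeSource{Path: path},
                },
                NodeAffinity: &corev1.VolumeNodeAffinity{
                    Required: &corev1.NodeSelector{
                        NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                            MatchExpressions: []corev1.NodeSelectorRequirement{{
                                Key:      "kubernetes.io/hostname",
                                Operator: corev1.NodeSelectorOpIn,
                                Values:   []string{node},
                            }},
                        }},
                    },
                },
            },
        }
    }

    func main() {
        _ = localPVOnNode("local-pv-demo", "/tmp/local-volume-demo", "v122-worker2")
    }
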
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.045 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:110 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:56.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Oct 5 12:06:02.961: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5411 PodName:hostexec-v122-worker2-n9ccp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:02.961: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:03.113: INFO: exec v122-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Oct 5 12:06:03.113: INFO: exec v122-worker2: stdout: "0\n" Oct 5 12:06:03.113: INFO: exec v122-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Oct 5 12:06:03.113: INFO: exec v122-worker2: exit code: 0 Oct 5 12:06:03.113: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:03.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5411" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [6.219 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1250 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:22.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] two pods: should call NodeStage after previous NodeUnstage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:962 STEP: Building a driver namespace object, basename csi-mock-volumes-6911 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Oct 5 12:04:22.698: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-attacher Oct 5 12:04:22.702: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6911 Oct 5 12:04:22.702: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6911 Oct 5 12:04:22.706: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6911 Oct 5 12:04:22.710: INFO: creating *v1.Role: csi-mock-volumes-6911-8514/external-attacher-cfg-csi-mock-volumes-6911 Oct 5 12:04:22.714: INFO: creating *v1.RoleBinding: csi-mock-volumes-6911-8514/csi-attacher-role-cfg Oct 5 12:04:22.718: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-provisioner Oct 5 12:04:22.722: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6911 Oct 5 12:04:22.722: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6911 Oct 5 12:04:22.726: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6911 Oct 5 12:04:22.729: INFO: creating *v1.Role: csi-mock-volumes-6911-8514/external-provisioner-cfg-csi-mock-volumes-6911 Oct 5 12:04:22.733: INFO: creating *v1.RoleBinding: csi-mock-volumes-6911-8514/csi-provisioner-role-cfg Oct 5 12:04:22.737: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-resizer Oct 5 12:04:22.741: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6911 Oct 5 12:04:22.741: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6911 Oct 5 12:04:22.744: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6911 Oct 5 12:04:22.748: INFO: creating *v1.Role: csi-mock-volumes-6911-8514/external-resizer-cfg-csi-mock-volumes-6911 Oct 5 12:04:22.751: INFO: creating *v1.RoleBinding: csi-mock-volumes-6911-8514/csi-resizer-role-cfg Oct 5 
12:04:22.755: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-snapshotter Oct 5 12:04:22.759: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6911 Oct 5 12:04:22.759: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6911 Oct 5 12:04:22.762: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6911 Oct 5 12:04:22.766: INFO: creating *v1.Role: csi-mock-volumes-6911-8514/external-snapshotter-leaderelection-csi-mock-volumes-6911 Oct 5 12:04:22.770: INFO: creating *v1.RoleBinding: csi-mock-volumes-6911-8514/external-snapshotter-leaderelection Oct 5 12:04:22.774: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-mock Oct 5 12:04:22.778: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6911 Oct 5 12:04:22.782: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6911 Oct 5 12:04:22.787: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6911 Oct 5 12:04:22.790: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6911 Oct 5 12:04:22.794: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6911 Oct 5 12:04:22.798: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6911 Oct 5 12:04:22.802: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6911 Oct 5 12:04:22.806: INFO: creating *v1.StatefulSet: csi-mock-volumes-6911-8514/csi-mockplugin Oct 5 12:04:22.813: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6911 Oct 5 12:04:22.817: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6911" Oct 5 12:04:22.821: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6911 to register on node v122-worker I1005 12:04:31.876049 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1005 12:04:31.877949 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6911","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:04:31.879810 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1005 12:04:31.882168 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1005 12:04:31.968496 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6911","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:04:32.785972 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-6911"},"Error":"","FullError":null} STEP: 
Creating pod Oct 5 12:04:39.100: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:04:39.107: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-v9cp5] to have phase Bound Oct 5 12:04:39.110: INFO: PersistentVolumeClaim pvc-v9cp5 found but phase is Pending instead of Bound. I1005 12:04:39.117943 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f"}}},"Error":"","FullError":null} Oct 5 12:04:41.115: INFO: PersistentVolumeClaim pvc-v9cp5 found and phase=Bound (2.007300541s) Oct 5 12:04:41.133: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-v9cp5] to have phase Bound Oct 5 12:04:41.136: INFO: PersistentVolumeClaim pvc-v9cp5 found and phase=Bound (2.919238ms) I1005 12:04:46.738665 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:04:46.741128 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:04:46.743: INFO: >>> kubeConfig: /root/.kube/config I1005 12:04:46.879937 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f","storage.kubernetes.io/csiProvisionerIdentity":"1664971471883-8081-csi-mock-csi-mock-volumes-6911"}},"Response":{},"Error":"","FullError":null} I1005 12:04:47.572023 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:04:47.574472 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:04:47.576: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:04:47.711: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:04:47.856: INFO: >>> kubeConfig: /root/.kube/config I1005 12:04:47.983109 24 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/globalmount","target_path":"/var/lib/kubelet/pods/5a40190d-4f6c-4b17-a50d-7d1c3be30df1/volumes/kubernetes.io~csi/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f","storage.kubernetes.io/csiProvisionerIdentity":"1664971471883-8081-csi-mock-csi-mock-volumes-6911"}},"Response":{},"Error":"","FullError":null} Oct 5 12:04:53.143: INFO: Deleting pod "pvc-volume-tester-jmrzx" in namespace "csi-mock-volumes-6911" Oct 5 12:04:53.148: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jmrzx" to be fully deleted Oct 5 12:04:53.601: INFO: >>> kubeConfig: /root/.kube/config I1005 12:04:53.688975 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/5a40190d-4f6c-4b17-a50d-7d1c3be30df1/volumes/kubernetes.io~csi/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/mount"},"Response":{},"Error":"","FullError":null} I1005 12:04:53.705526 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:04:53.707731 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I1005 12:04:54.310897 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:04:54.313362 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I1005 12:04:55.418490 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:04:55.420696 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I1005 12:04:57.432344 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:04:57.434009 24 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I1005 12:05:00.255740 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:00.258393 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:05:00.260: INFO: >>> kubeConfig: /root/.kube/config I1005 12:05:00.355162 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f","storage.kubernetes.io/csiProvisionerIdentity":"1664971471883-8081-csi-mock-csi-mock-volumes-6911"}},"Response":{},"Error":"","FullError":null} I1005 12:05:00.969801 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:00.972069 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:05:00.974: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:01.085: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:01.243: INFO: >>> kubeConfig: /root/.kube/config I1005 12:05:01.358065 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/globalmount","target_path":"/var/lib/kubelet/pods/15c2b377-dddb-4c05-99ba-ede175d4a268/volumes/kubernetes.io~csi/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f","storage.kubernetes.io/csiProvisionerIdentity":"1664971471883-8081-csi-mock-csi-mock-volumes-6911"}},"Response":{},"Error":"","FullError":null} Oct 5 12:05:05.168: INFO: Deleting pod "pvc-volume-tester-2wdtc" in namespace "csi-mock-volumes-6911" Oct 5 12:05:05.173: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2wdtc" to be fully deleted Oct 5 12:05:06.607: INFO: >>> kubeConfig: /root/.kube/config I1005 12:05:06.755984 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/15c2b377-dddb-4c05-99ba-ede175d4a268/volumes/kubernetes.io~csi/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/mount"},"Response":{},"Error":"","FullError":null} I1005 12:05:06.811784 24 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:06.814013 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-jmrzx Oct 5 12:05:10.179: INFO: Deleting pod "pvc-volume-tester-jmrzx" in namespace "csi-mock-volumes-6911" STEP: Deleting pod pvc-volume-tester-2wdtc Oct 5 12:05:10.183: INFO: Deleting pod "pvc-volume-tester-2wdtc" in namespace "csi-mock-volumes-6911" STEP: Deleting claim pvc-v9cp5 Oct 5 12:05:10.193: INFO: Waiting up to 2m0s for PersistentVolume pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f to get deleted Oct 5 12:05:10.196: INFO: PersistentVolume pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f found and phase=Bound (2.842932ms) I1005 12:05:10.212271 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Oct 5 12:05:12.200: INFO: PersistentVolume pvc-cf6a3b41-de21-4191-8a25-5cf68ba5694f was removed STEP: Deleting storageclass csi-mock-volumes-6911-sc2whz7 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6911 STEP: Waiting for namespaces [csi-mock-volumes-6911] to vanish STEP: uninstalling csi mock driver Oct 5 12:05:24.236: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-attacher Oct 5 12:05:24.241: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6911 Oct 5 12:05:24.247: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6911 Oct 5 12:05:24.251: INFO: deleting *v1.Role: csi-mock-volumes-6911-8514/external-attacher-cfg-csi-mock-volumes-6911 Oct 5 12:05:24.256: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6911-8514/csi-attacher-role-cfg Oct 5 12:05:24.260: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-provisioner Oct 5 12:05:24.265: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6911 Oct 5 12:05:24.270: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6911 Oct 5 12:05:24.275: INFO: deleting *v1.Role: csi-mock-volumes-6911-8514/external-provisioner-cfg-csi-mock-volumes-6911 Oct 5 12:05:24.280: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6911-8514/csi-provisioner-role-cfg Oct 5 12:05:24.284: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-resizer Oct 5 12:05:24.289: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6911 Oct 5 12:05:24.293: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6911 Oct 5 12:05:24.298: INFO: deleting *v1.Role: csi-mock-volumes-6911-8514/external-resizer-cfg-csi-mock-volumes-6911 Oct 5 12:05:24.303: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6911-8514/csi-resizer-role-cfg Oct 5 12:05:24.307: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-snapshotter Oct 5 12:05:24.312: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6911 Oct 5 12:05:24.316: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6911 Oct 5 12:05:24.320: INFO: deleting *v1.Role: 
csi-mock-volumes-6911-8514/external-snapshotter-leaderelection-csi-mock-volumes-6911 Oct 5 12:05:24.324: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6911-8514/external-snapshotter-leaderelection Oct 5 12:05:24.329: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6911-8514/csi-mock Oct 5 12:05:24.336: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6911 Oct 5 12:05:24.341: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6911 Oct 5 12:05:24.346: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6911 Oct 5 12:05:24.350: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6911 Oct 5 12:05:24.355: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6911 Oct 5 12:05:24.359: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6911 Oct 5 12:05:24.363: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6911 Oct 5 12:05:24.368: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6911-8514/csi-mockplugin Oct 5 12:05:24.373: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6911 STEP: deleting the driver namespace: csi-mock-volumes-6911-8514 STEP: Waiting for namespaces [csi-mock-volumes-6911-8514] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:08.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:105.785 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeUnstage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:901 two pods: should call NodeStage after previous NodeUnstage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:962 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error","total":-1,"completed":4,"skipped":241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:01.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4" Oct 5 12:06:03.158: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4 && dd if=/dev/zero 
of=/tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4/file] Namespace:persistent-local-volumes-test-479 PodName:hostexec-v122-worker-drpzm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:03.158: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:03.326: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-479 PodName:hostexec-v122-worker-drpzm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:03.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:06:03.465: INFO: Creating a PV followed by a PVC Oct 5 12:06:03.475: INFO: Waiting for PV local-pv849df to bind to PVC pvc-64z5p Oct 5 12:06:03.475: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-64z5p] to have phase Bound Oct 5 12:06:03.479: INFO: PersistentVolumeClaim pvc-64z5p found but phase is Pending instead of Bound. Oct 5 12:06:05.483: INFO: PersistentVolumeClaim pvc-64z5p found and phase=Bound (2.008439751s) Oct 5 12:06:05.483: INFO: Waiting up to 3m0s for PersistentVolume local-pv849df to have phase Bound Oct 5 12:06:05.487: INFO: PersistentVolume local-pv849df found and phase=Bound (3.264027ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Oct 5 12:06:07.511: INFO: pod "pod-93f3b5c2-eef1-45d8-b652-86ad64cc54e6" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:06:07.511: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-479 PodName:pod-93f3b5c2-eef1-45d8-b652-86ad64cc54e6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:07.511: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:07.629: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:06:07.629: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-479 PodName:pod-93f3b5c2-eef1-45d8-b652-86ad64cc54e6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:07.629: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:07.752: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Oct 5 12:06:09.772: INFO: pod "pod-8163209c-2f6f-4434-8120-40bdaa07c743" created on Node "v122-worker" Oct 5 12:06:09.772: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-479 PodName:pod-8163209c-2f6f-4434-8120-40bdaa07c743 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:09.772: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:09.891: INFO: podRWCmdExec cmd: "cat 
/mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Oct 5 12:06:09.891: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop8 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-479 PodName:pod-8163209c-2f6f-4434-8120-40bdaa07c743 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:09.891: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:09.965: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop8 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Oct 5 12:06:09.965: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-479 PodName:pod-93f3b5c2-eef1-45d8-b652-86ad64cc54e6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:09.965: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:10.070: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/dev/loop8", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-93f3b5c2-eef1-45d8-b652-86ad64cc54e6 in namespace persistent-local-volumes-test-479 STEP: Deleting pod2 STEP: Deleting pod pod-8163209c-2f6f-4434-8120-40bdaa07c743 in namespace persistent-local-volumes-test-479 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:06:10.081: INFO: Deleting PersistentVolumeClaim "pvc-64z5p" Oct 5 12:06:10.085: INFO: Deleting PersistentVolume "local-pv849df" Oct 5 12:06:10.088: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-479 PodName:hostexec-v122-worker-drpzm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:10.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop8" on node "v122-worker" at path /tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4/file Oct 5 12:06:10.218: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-479 PodName:hostexec-v122-worker-drpzm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:10.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4 Oct 5 12:06:10.332: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4] Namespace:persistent-local-volumes-test-479 PodName:hostexec-v122-worker-drpzm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:10.332: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:10.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-479" for this suite. 
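The ExecWithOptions / podRWCmdExec entries above all come down to streaming an exec request into a running container and capturing its output. A stripped-down client-go version of that call, assuming only the kubeconfig path used by this run; namespace, pod, container, and command are placeholders:

    package main

    import (
        "bytes"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        restclient "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    // execInPod runs a command in a container and returns stdout and stderr,
    // the same mechanism the e2e framework wraps in ExecWithOptions.
    func execInPod(config *restclient.Config, client kubernetes.Interface, ns, pod, container string, cmd []string) (string, string, error) {
        req := client.CoreV1().RESTClient().Post().
            Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: container,
                Command:   cmd,
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        executor, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            return "", "", err
        }
        var stdout, stderr bytes.Buffer
        err = executor.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
        return stdout.String(), stderr.String(), err
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        out, errOut, err := execInPod(config, client, "default", "pod-demo", "write-pod",
            []string{"/bin/sh", "-c", "cat /mnt/volume1/test-file"})
        fmt.Println("stdout:", out, "stderr:", errOut, "err:", err)
    }
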
• [SLOW TEST:9.373 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:26.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] two pods: should call NodeStage after previous NodeUnstage transient error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:962 STEP: Building a driver namespace object, basename csi-mock-volumes-3568 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Oct 5 12:04:26.543: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-attacher Oct 5 12:04:26.547: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3568 Oct 5 12:04:26.547: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3568 Oct 5 12:04:26.550: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3568 Oct 5 12:04:26.554: INFO: creating *v1.Role: csi-mock-volumes-3568-3648/external-attacher-cfg-csi-mock-volumes-3568 Oct 5 12:04:26.557: INFO: creating *v1.RoleBinding: csi-mock-volumes-3568-3648/csi-attacher-role-cfg Oct 5 12:04:26.561: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-provisioner Oct 5 12:04:26.564: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3568 Oct 5 12:04:26.564: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3568 Oct 5 12:04:26.568: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3568 Oct 5 12:04:26.571: INFO: creating *v1.Role: csi-mock-volumes-3568-3648/external-provisioner-cfg-csi-mock-volumes-3568 Oct 5 12:04:26.575: INFO: creating *v1.RoleBinding: csi-mock-volumes-3568-3648/csi-provisioner-role-cfg Oct 5 12:04:26.578: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-resizer Oct 5 12:04:26.582: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3568 Oct 5 12:04:26.582: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3568 Oct 5 12:04:26.585: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3568 Oct 5 12:04:26.588: INFO: creating *v1.Role: csi-mock-volumes-3568-3648/external-resizer-cfg-csi-mock-volumes-3568 Oct 5 12:04:26.591: INFO: 
creating *v1.RoleBinding: csi-mock-volumes-3568-3648/csi-resizer-role-cfg Oct 5 12:04:26.594: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-snapshotter Oct 5 12:04:26.597: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3568 Oct 5 12:04:26.597: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3568 Oct 5 12:04:26.600: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3568 Oct 5 12:04:26.604: INFO: creating *v1.Role: csi-mock-volumes-3568-3648/external-snapshotter-leaderelection-csi-mock-volumes-3568 Oct 5 12:04:26.607: INFO: creating *v1.RoleBinding: csi-mock-volumes-3568-3648/external-snapshotter-leaderelection Oct 5 12:04:26.610: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-mock Oct 5 12:04:26.613: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3568 Oct 5 12:04:26.616: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3568 Oct 5 12:04:26.619: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3568 Oct 5 12:04:26.623: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3568 Oct 5 12:04:26.626: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3568 Oct 5 12:04:26.629: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3568 Oct 5 12:04:26.633: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3568 Oct 5 12:04:26.636: INFO: creating *v1.StatefulSet: csi-mock-volumes-3568-3648/csi-mockplugin Oct 5 12:04:26.643: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3568 Oct 5 12:04:26.647: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3568" Oct 5 12:04:26.650: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3568 to register on node v122-worker2 I1005 12:04:40.749411 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1005 12:04:40.752161 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3568","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:04:40.754589 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1005 12:04:40.757635 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1005 12:04:40.855453 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3568","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:04:41.544396 33 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3568"},"Error":"","FullError":null} STEP: Creating pod Oct 5 12:04:53.051: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:04:53.064: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-f47rn] to have phase Bound Oct 5 12:04:53.067: INFO: PersistentVolumeClaim pvc-f47rn found but phase is Pending instead of Bound. I1005 12:04:53.082206 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-c13077ab-d440-4221-8b2e-4e929259c11e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-c13077ab-d440-4221-8b2e-4e929259c11e"}}},"Error":"","FullError":null} Oct 5 12:04:55.070: INFO: PersistentVolumeClaim pvc-f47rn found and phase=Bound (2.006328893s) Oct 5 12:04:55.082: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-f47rn] to have phase Bound Oct 5 12:04:55.086: INFO: PersistentVolumeClaim pvc-f47rn found and phase=Bound (3.847161ms) I1005 12:04:55.308646 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:04:55.311819 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:04:55.314: INFO: >>> kubeConfig: /root/.kube/config I1005 12:04:55.471448 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c13077ab-d440-4221-8b2e-4e929259c11e","storage.kubernetes.io/csiProvisionerIdentity":"1664971480759-8081-csi-mock-csi-mock-volumes-3568"}},"Response":{},"Error":"","FullError":null} I1005 12:04:55.477952 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:04:55.480441 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:04:55.482: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:04:55.643: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:04:55.749: INFO: >>> kubeConfig: /root/.kube/config I1005 12:04:55.863284 33 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/globalmount","target_path":"/var/lib/kubelet/pods/71e65f18-02c4-4fe7-a49b-6ba1be080f5b/volumes/kubernetes.io~csi/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c13077ab-d440-4221-8b2e-4e929259c11e","storage.kubernetes.io/csiProvisionerIdentity":"1664971480759-8081-csi-mock-csi-mock-volumes-3568"}},"Response":{},"Error":"","FullError":null} Oct 5 12:04:59.101: INFO: Deleting pod "pvc-volume-tester-prssn" in namespace "csi-mock-volumes-3568" Oct 5 12:04:59.113: INFO: Wait up to 5m0s for pod "pvc-volume-tester-prssn" to be fully deleted Oct 5 12:04:59.648: INFO: >>> kubeConfig: /root/.kube/config I1005 12:04:59.797257 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/71e65f18-02c4-4fe7-a49b-6ba1be080f5b/volumes/kubernetes.io~csi/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/mount"},"Response":{},"Error":"","FullError":null} I1005 12:04:59.852873 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:04:59.855090 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I1005 12:05:00.459377 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:00.461760 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I1005 12:05:01.566694 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:01.569525 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I1005 12:05:03.582668 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:03.584881 33 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I1005 12:05:03.882777 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:03.885769 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:05:03.888: INFO: >>> kubeConfig: /root/.kube/config I1005 12:05:04.007670 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c13077ab-d440-4221-8b2e-4e929259c11e","storage.kubernetes.io/csiProvisionerIdentity":"1664971480759-8081-csi-mock-csi-mock-volumes-3568"}},"Response":{},"Error":"","FullError":null} I1005 12:05:04.183690 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:04.185814 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:05:04.188: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:04.314: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:05:04.437: INFO: >>> kubeConfig: /root/.kube/config I1005 12:05:04.546969 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/globalmount","target_path":"/var/lib/kubelet/pods/406bffdf-488f-4afa-a463-d6681829210d/volumes/kubernetes.io~csi/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-c13077ab-d440-4221-8b2e-4e929259c11e","storage.kubernetes.io/csiProvisionerIdentity":"1664971480759-8081-csi-mock-csi-mock-volumes-3568"}},"Response":{},"Error":"","FullError":null} Oct 5 12:05:07.137: INFO: Deleting pod "pvc-volume-tester-hmvvw" in namespace "csi-mock-volumes-3568" Oct 5 12:05:07.142: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hmvvw" to be fully deleted I1005 12:05:08.314685 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:08.317779 33 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/406bffdf-488f-4afa-a463-d6681829210d/volumes/kubernetes.io~csi/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} Oct 5 12:05:08.625: INFO: >>> kubeConfig: /root/.kube/config I1005 12:05:08.756746 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/406bffdf-488f-4afa-a463-d6681829210d/volumes/kubernetes.io~csi/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/mount"},"Response":{},"Error":"","FullError":null} I1005 12:05:08.831217 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:08.833482 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c13077ab-d440-4221-8b2e-4e929259c11e/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-prssn Oct 5 12:05:14.152: INFO: Deleting pod "pvc-volume-tester-prssn" in namespace "csi-mock-volumes-3568" STEP: Deleting pod pvc-volume-tester-hmvvw Oct 5 12:05:14.155: INFO: Deleting pod "pvc-volume-tester-hmvvw" in namespace "csi-mock-volumes-3568" STEP: Deleting claim pvc-f47rn Oct 5 12:05:14.167: INFO: Waiting up to 2m0s for PersistentVolume pvc-c13077ab-d440-4221-8b2e-4e929259c11e to get deleted Oct 5 12:05:14.171: INFO: PersistentVolume pvc-c13077ab-d440-4221-8b2e-4e929259c11e found and phase=Bound (3.177504ms) I1005 12:05:14.191382 33 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Oct 5 12:05:16.175: INFO: PersistentVolume pvc-c13077ab-d440-4221-8b2e-4e929259c11e was removed STEP: Deleting storageclass csi-mock-volumes-3568-scn7m8l STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3568 STEP: Waiting for namespaces [csi-mock-volumes-3568] to vanish STEP: uninstalling csi mock driver Oct 5 12:05:28.213: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-attacher Oct 5 12:05:28.217: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3568 Oct 5 12:05:28.223: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3568 Oct 5 12:05:28.229: INFO: deleting *v1.Role: csi-mock-volumes-3568-3648/external-attacher-cfg-csi-mock-volumes-3568 Oct 5 12:05:28.234: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3568-3648/csi-attacher-role-cfg Oct 5 12:05:28.238: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-provisioner Oct 5 12:05:28.242: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3568 Oct 5 12:05:28.246: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3568 Oct 5 12:05:28.249: INFO: deleting *v1.Role: csi-mock-volumes-3568-3648/external-provisioner-cfg-csi-mock-volumes-3568 Oct 5 12:05:28.252: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3568-3648/csi-provisioner-role-cfg Oct 5 12:05:28.256: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-resizer Oct 5 12:05:28.260: INFO: deleting *v1.ClusterRole: 
external-resizer-runner-csi-mock-volumes-3568 Oct 5 12:05:28.263: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3568 Oct 5 12:05:28.267: INFO: deleting *v1.Role: csi-mock-volumes-3568-3648/external-resizer-cfg-csi-mock-volumes-3568 Oct 5 12:05:28.271: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3568-3648/csi-resizer-role-cfg Oct 5 12:05:28.274: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-snapshotter Oct 5 12:05:28.278: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3568 Oct 5 12:05:28.281: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3568 Oct 5 12:05:28.284: INFO: deleting *v1.Role: csi-mock-volumes-3568-3648/external-snapshotter-leaderelection-csi-mock-volumes-3568 Oct 5 12:05:28.287: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3568-3648/external-snapshotter-leaderelection Oct 5 12:05:28.290: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3568-3648/csi-mock Oct 5 12:05:28.294: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3568 Oct 5 12:05:28.296: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3568 Oct 5 12:05:28.300: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3568 Oct 5 12:05:28.303: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3568 Oct 5 12:05:28.305: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3568 Oct 5 12:05:28.309: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3568 Oct 5 12:05:28.312: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3568 Oct 5 12:05:28.315: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3568-3648/csi-mockplugin Oct 5 12:05:28.319: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3568 STEP: deleting the driver namespace: csi-mock-volumes-3568-3648 STEP: Waiting for namespaces [csi-mock-volumes-3568-3648] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:12.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:105.871 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeUnstage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:901 two pods: should call NodeStage after previous NodeUnstage transient error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:962 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error","total":-1,"completed":3,"skipped":33,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:10.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] 
[NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 5 12:06:10.551: INFO: Waiting up to 5m0s for pod "pod-cce156ff-197b-4ced-b003-231bc95e7715" in namespace "emptydir-1149" to be "Succeeded or Failed" Oct 5 12:06:10.554: INFO: Pod "pod-cce156ff-197b-4ced-b003-231bc95e7715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.605863ms Oct 5 12:06:12.557: INFO: Pod "pod-cce156ff-197b-4ced-b003-231bc95e7715": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006450722s Oct 5 12:06:14.562: INFO: Pod "pod-cce156ff-197b-4ced-b003-231bc95e7715": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010553388s STEP: Saw pod success Oct 5 12:06:14.562: INFO: Pod "pod-cce156ff-197b-4ced-b003-231bc95e7715" satisfied condition "Succeeded or Failed" Oct 5 12:06:14.565: INFO: Trying to get logs from node v122-worker pod pod-cce156ff-197b-4ced-b003-231bc95e7715 container test-container: STEP: delete the pod Oct 5 12:06:14.579: INFO: Waiting for pod pod-cce156ff-197b-4ced-b003-231bc95e7715 to disappear Oct 5 12:06:14.582: INFO: Pod pod-cce156ff-197b-4ced-b003-231bc95e7715 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:14.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1149" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":7,"skipped":156,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:12.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Oct 5 12:06:12.407: INFO: The status of Pod test-hostpath-type-84b85 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:06:14.412: INFO: The status of Pod test-hostpath-type-84b85 is Running (Ready = true) STEP: running on node v122-worker STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:18.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "host-path-type-directory-1028" for this suite. • [SLOW TEST:6.106 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev","total":-1,"completed":4,"skipped":43,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:03.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-139fa0c2-c9f0-4ace-88b7-78bd3969225d" Oct 5 12:06:07.200: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-139fa0c2-c9f0-4ace-88b7-78bd3969225d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-139fa0c2-c9f0-4ace-88b7-78bd3969225d" "/tmp/local-volume-test-139fa0c2-c9f0-4ace-88b7-78bd3969225d"] Namespace:persistent-local-volumes-test-6529 PodName:hostexec-v122-worker2-j7pkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:07.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:06:07.318: INFO: Creating a PV followed by a PVC Oct 5 12:06:07.327: INFO: Waiting for PV local-pvm4249 to bind to PVC pvc-hkzzv Oct 5 12:06:07.328: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hkzzv] to have phase Bound Oct 5 12:06:07.331: INFO: PersistentVolumeClaim pvc-hkzzv found but phase is Pending instead of Bound. Oct 5 12:06:09.335: INFO: PersistentVolumeClaim pvc-hkzzv found but phase is Pending instead of Bound. Oct 5 12:06:11.340: INFO: PersistentVolumeClaim pvc-hkzzv found but phase is Pending instead of Bound. Oct 5 12:06:13.345: INFO: PersistentVolumeClaim pvc-hkzzv found but phase is Pending instead of Bound. Oct 5 12:06:15.350: INFO: PersistentVolumeClaim pvc-hkzzv found but phase is Pending instead of Bound. Oct 5 12:06:17.355: INFO: PersistentVolumeClaim pvc-hkzzv found but phase is Pending instead of Bound. 
Oct 5 12:06:19.358: INFO: PersistentVolumeClaim pvc-hkzzv found and phase=Bound (12.030821542s) Oct 5 12:06:19.358: INFO: Waiting up to 3m0s for PersistentVolume local-pvm4249 to have phase Bound Oct 5 12:06:19.362: INFO: PersistentVolume local-pvm4249 found and phase=Bound (3.078609ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Oct 5 12:06:21.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6529 exec pod-5df669db-3a4b-431d-af4a-936168194ad1 --namespace=persistent-local-volumes-test-6529 -- stat -c %g /mnt/volume1' Oct 5 12:06:21.550: INFO: stderr: "" Oct 5 12:06:21.550: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-5df669db-3a4b-431d-af4a-936168194ad1 in namespace persistent-local-volumes-test-6529 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:06:21.555: INFO: Deleting PersistentVolumeClaim "pvc-hkzzv" Oct 5 12:06:21.559: INFO: Deleting PersistentVolume "local-pvm4249" STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-139fa0c2-c9f0-4ace-88b7-78bd3969225d" Oct 5 12:06:21.564: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-139fa0c2-c9f0-4ace-88b7-78bd3969225d"] Namespace:persistent-local-volumes-test-6529 PodName:hostexec-v122-worker2-j7pkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:21.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:06:21.695: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-139fa0c2-c9f0-4ace-88b7-78bd3969225d] Namespace:persistent-local-volumes-test-6529 PodName:hostexec-v122-worker2-j7pkd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:21.695: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:21.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6529" for this suite. 
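The tmpfs volume type above is just a small tmpfs mount created on the node, and the fsGroup check is a stat of the group id on the mounted path inside the pod. A rough manual equivalent of the commands visible in this run (node path, namespace and pod name are the ones from the log; the expected group id 1234 is presumably the fsGroup set in the pod's securityContext):

    # on node v122-worker2: create and mount the tmpfs backing the local PV
    MNT=/tmp/local-volume-test-139fa0c2-c9f0-4ace-88b7-78bd3969225d
    mkdir -p "$MNT" && mount -t tmpfs -o size=10m tmpfs "$MNT"

    # from the test host: check group ownership of the volume inside the pod
    kubectl --namespace=persistent-local-volumes-test-6529 \
      exec pod-5df669db-3a4b-431d-af4a-936168194ad1 -- stat -c %g /mnt/volume1
    # expected stdout: 1234

    # teardown on the node
    umount "$MNT" && rm -r "$MNT"
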
• [SLOW TEST:18.712 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":6,"skipped":147,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:43.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 STEP: Building a driver namespace object, basename csi-mock-volumes-3226 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:05:43.149: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3226-2419/csi-attacher Oct 5 12:05:43.153: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3226 Oct 5 12:05:43.153: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3226 Oct 5 12:05:43.157: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3226 Oct 5 12:05:43.161: INFO: creating *v1.Role: csi-mock-volumes-3226-2419/external-attacher-cfg-csi-mock-volumes-3226 Oct 5 12:05:43.165: INFO: creating *v1.RoleBinding: csi-mock-volumes-3226-2419/csi-attacher-role-cfg Oct 5 12:05:43.169: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3226-2419/csi-provisioner Oct 5 12:05:43.172: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3226 Oct 5 12:05:43.172: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3226 Oct 5 12:05:43.176: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3226 Oct 5 12:05:43.179: INFO: creating *v1.Role: csi-mock-volumes-3226-2419/external-provisioner-cfg-csi-mock-volumes-3226 Oct 5 12:05:43.183: INFO: creating *v1.RoleBinding: csi-mock-volumes-3226-2419/csi-provisioner-role-cfg Oct 5 12:05:43.185: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3226-2419/csi-resizer Oct 5 12:05:43.189: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3226 Oct 5 12:05:43.189: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3226 Oct 5 12:05:43.192: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3226 Oct 5 12:05:43.195: INFO: creating *v1.Role: csi-mock-volumes-3226-2419/external-resizer-cfg-csi-mock-volumes-3226 Oct 5 12:05:43.198: INFO: creating *v1.RoleBinding: csi-mock-volumes-3226-2419/csi-resizer-role-cfg Oct 5 12:05:43.202: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-3226-2419/csi-snapshotter Oct 5 12:05:43.206: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3226 Oct 5 12:05:43.206: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3226 Oct 5 12:05:43.209: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3226 Oct 5 12:05:43.212: INFO: creating *v1.Role: csi-mock-volumes-3226-2419/external-snapshotter-leaderelection-csi-mock-volumes-3226 Oct 5 12:05:43.215: INFO: creating *v1.RoleBinding: csi-mock-volumes-3226-2419/external-snapshotter-leaderelection Oct 5 12:05:43.219: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3226-2419/csi-mock Oct 5 12:05:43.222: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3226 Oct 5 12:05:43.225: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3226 Oct 5 12:05:43.228: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3226 Oct 5 12:05:43.231: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3226 Oct 5 12:05:43.234: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3226 Oct 5 12:05:43.237: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3226 Oct 5 12:05:43.240: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3226 Oct 5 12:05:43.243: INFO: creating *v1.StatefulSet: csi-mock-volumes-3226-2419/csi-mockplugin Oct 5 12:05:43.248: INFO: creating *v1.StatefulSet: csi-mock-volumes-3226-2419/csi-mockplugin-attacher Oct 5 12:05:43.251: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3226 to register on node v122-worker2 STEP: Creating pod Oct 5 12:05:48.267: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:05:48.274: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-s8lx6] to have phase Bound Oct 5 12:05:48.279: INFO: PersistentVolumeClaim pvc-s8lx6 found but phase is Pending instead of Bound. 
Oct 5 12:05:50.284: INFO: PersistentVolumeClaim pvc-s8lx6 found and phase=Bound (2.009604354s) STEP: Deleting the previously created pod Oct 5 12:06:06.302: INFO: Deleting pod "pvc-volume-tester-fpv9m" in namespace "csi-mock-volumes-3226" Oct 5 12:06:06.307: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fpv9m" to be fully deleted STEP: Checking CSI driver logs Oct 5 12:06:08.333: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3341d4dc-a26e-4511-b20f-1889700143c4/volumes/kubernetes.io~csi/pvc-6bbd2daa-1148-4f80-a790-dc4b0c19cdcb/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-fpv9m Oct 5 12:06:08.333: INFO: Deleting pod "pvc-volume-tester-fpv9m" in namespace "csi-mock-volumes-3226" STEP: Deleting claim pvc-s8lx6 Oct 5 12:06:08.343: INFO: Waiting up to 2m0s for PersistentVolume pvc-6bbd2daa-1148-4f80-a790-dc4b0c19cdcb to get deleted Oct 5 12:06:08.347: INFO: PersistentVolume pvc-6bbd2daa-1148-4f80-a790-dc4b0c19cdcb found and phase=Bound (4.554318ms) Oct 5 12:06:10.351: INFO: PersistentVolume pvc-6bbd2daa-1148-4f80-a790-dc4b0c19cdcb was removed STEP: Deleting storageclass csi-mock-volumes-3226-sc9bch2 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3226 STEP: Waiting for namespaces [csi-mock-volumes-3226] to vanish STEP: uninstalling csi mock driver Oct 5 12:06:16.365: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3226-2419/csi-attacher Oct 5 12:06:16.370: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3226 Oct 5 12:06:16.375: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3226 Oct 5 12:06:16.380: INFO: deleting *v1.Role: csi-mock-volumes-3226-2419/external-attacher-cfg-csi-mock-volumes-3226 Oct 5 12:06:16.384: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3226-2419/csi-attacher-role-cfg Oct 5 12:06:16.389: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3226-2419/csi-provisioner Oct 5 12:06:16.393: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3226 Oct 5 12:06:16.398: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3226 Oct 5 12:06:16.402: INFO: deleting *v1.Role: csi-mock-volumes-3226-2419/external-provisioner-cfg-csi-mock-volumes-3226 Oct 5 12:06:16.407: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3226-2419/csi-provisioner-role-cfg Oct 5 12:06:16.412: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3226-2419/csi-resizer Oct 5 12:06:16.417: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3226 Oct 5 12:06:16.421: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3226 Oct 5 12:06:16.426: INFO: deleting *v1.Role: csi-mock-volumes-3226-2419/external-resizer-cfg-csi-mock-volumes-3226 Oct 5 12:06:16.430: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3226-2419/csi-resizer-role-cfg Oct 5 12:06:16.435: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3226-2419/csi-snapshotter Oct 5 12:06:16.439: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3226 Oct 5 12:06:16.444: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3226 Oct 5 12:06:16.448: INFO: deleting *v1.Role: csi-mock-volumes-3226-2419/external-snapshotter-leaderelection-csi-mock-volumes-3226 Oct 5 12:06:16.453: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-3226-2419/external-snapshotter-leaderelection Oct 5 12:06:16.458: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3226-2419/csi-mock Oct 5 12:06:16.462: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3226 Oct 5 12:06:16.467: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3226 Oct 5 12:06:16.471: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3226 Oct 5 12:06:16.476: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3226 Oct 5 12:06:16.480: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3226 Oct 5 12:06:16.485: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3226 Oct 5 12:06:16.489: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3226 Oct 5 12:06:16.494: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3226-2419/csi-mockplugin Oct 5 12:06:16.499: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3226-2419/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3226-2419 STEP: Waiting for namespaces [csi-mock-volumes-3226-2419] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:22.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:39.454 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444 should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":5,"skipped":296,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:18.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Oct 5 12:06:18.505: INFO: The status of Pod test-hostpath-type-55kqv is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:06:20.509: INFO: The status of Pod test-hostpath-type-55kqv is Running (Ready = true) STEP: running on node v122-worker STEP: Create a character device for further testing Oct 5 12:06:20.512: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-8069 PodName:test-hostpath-type-55kqv ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:20.512: INFO: >>> kubeConfig: 
/root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:22.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-8069" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset","total":-1,"completed":5,"skipped":44,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:22.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Oct 5 12:06:22.578: INFO: The status of Pod test-hostpath-type-2m42g is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:06:24.582: INFO: The status of Pod test-hostpath-type-2m42g is Running (Ready = true) STEP: running on node v122-worker STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:28.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-9136" for this suite. 
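Both HostPathType failure cases above pass only if kubelet refuses the mount and records an error event for the pod. One way to look at the same signal by hand is to filter the namespace's events for mount failures (the namespace is the one from this run; the reason value FailedMount is an assumption about which kubelet event the check matches):

    # list mount-failure events in the test namespace
    kubectl get events -n host-path-type-directory-9136 \
      --field-selector reason=FailedMount \
      -o custom-columns=REASON:.reason,MESSAGE:.message
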
• [SLOW TEST:6.112 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket","total":-1,"completed":6,"skipped":297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:08.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:06:10.514: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b8bc2d97-21b4-43d2-8fd1-b82224ad2647] Namespace:persistent-local-volumes-test-6780 PodName:hostexec-v122-worker2-cz4n9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:10.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:06:10.615: INFO: Creating a PV followed by a PVC Oct 5 12:06:10.624: INFO: Waiting for PV local-pvcdc5h to bind to PVC pvc-ns4lg Oct 5 12:06:10.624: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ns4lg] to have phase Bound Oct 5 12:06:10.627: INFO: PersistentVolumeClaim pvc-ns4lg found but phase is Pending instead of Bound. Oct 5 12:06:12.631: INFO: PersistentVolumeClaim pvc-ns4lg found but phase is Pending instead of Bound. Oct 5 12:06:14.635: INFO: PersistentVolumeClaim pvc-ns4lg found but phase is Pending instead of Bound. Oct 5 12:06:16.639: INFO: PersistentVolumeClaim pvc-ns4lg found but phase is Pending instead of Bound. Oct 5 12:06:18.643: INFO: PersistentVolumeClaim pvc-ns4lg found but phase is Pending instead of Bound. 
Oct 5 12:06:20.647: INFO: PersistentVolumeClaim pvc-ns4lg found and phase=Bound (10.022842086s) Oct 5 12:06:20.647: INFO: Waiting up to 3m0s for PersistentVolume local-pvcdc5h to have phase Bound Oct 5 12:06:20.649: INFO: PersistentVolume local-pvcdc5h found and phase=Bound (2.938296ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:06:24.674: INFO: pod "pod-249ab2b4-6793-4d84-824e-353f8d8a5436" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:06:24.674: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6780 PodName:pod-249ab2b4-6793-4d84-824e-353f8d8a5436 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:24.674: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:24.777: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:06:24.777: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6780 PodName:pod-249ab2b4-6793-4d84-824e-353f8d8a5436 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:24.777: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:24.900: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-249ab2b4-6793-4d84-824e-353f8d8a5436 in namespace persistent-local-volumes-test-6780 STEP: Creating pod2 STEP: Creating a pod Oct 5 12:06:30.925: INFO: pod "pod-4f6ec1c1-017d-4404-8560-d707e28d0ae3" created on Node "v122-worker2" STEP: Reading in pod2 Oct 5 12:06:30.926: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6780 PodName:pod-4f6ec1c1-017d-4404-8560-d707e28d0ae3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:30.926: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:31.049: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-4f6ec1c1-017d-4404-8560-d707e28d0ae3 in namespace persistent-local-volumes-test-6780 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:06:31.054: INFO: Deleting PersistentVolumeClaim "pvc-ns4lg" Oct 5 12:06:31.058: INFO: Deleting PersistentVolume "local-pvcdc5h" STEP: Removing the test directory Oct 5 12:06:31.062: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b8bc2d97-21b4-43d2-8fd1-b82224ad2647] Namespace:persistent-local-volumes-test-6780 PodName:hostexec-v122-worker2-cz4n9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:31.062: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:31.194: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "persistent-local-volumes-test-6780" for this suite. • [SLOW TEST:22.739 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":281,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:31.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Oct 5 12:06:31.262: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:31.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4990" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Oct 5 12:06:31.271: INFO: AfterEach: Cleaning up test resources Oct 5 12:06:31.271: INFO: pvc is nil Oct 5 12:06:31.271: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.044 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:22.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-c6fd2a9f-0f53-4652-97c8-e7ab466b6742" Oct 5 12:06:30.737: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c6fd2a9f-0f53-4652-97c8-e7ab466b6742 && dd if=/dev/zero of=/tmp/local-volume-test-c6fd2a9f-0f53-4652-97c8-e7ab466b6742/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-c6fd2a9f-0f53-4652-97c8-e7ab466b6742/file] Namespace:persistent-local-volumes-test-8058 PodName:hostexec-v122-worker2-9hn46 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:30.737: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:30.914: INFO: exec v122-worker2: command: mkdir -p /tmp/local-volume-test-c6fd2a9f-0f53-4652-97c8-e7ab466b6742 && dd if=/dev/zero of=/tmp/local-volume-test-c6fd2a9f-0f53-4652-97c8-e7ab466b6742/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-c6fd2a9f-0f53-4652-97c8-e7ab466b6742/file Oct 5 12:06:30.914: INFO: exec v122-worker2: stdout: "" Oct 5 12:06:30.914: INFO: exec v122-worker2: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0223904 s, 937 MB/s\nlosetup: /tmp/local-volume-test-c6fd2a9f-0f53-4652-97c8-e7ab466b6742/file: failed to set up loop device: No such device or address\n" Oct 5 12:06:30.914: INFO: exec v122-worker2: exit code: 0 Oct 5 12:06:30.914: FAIL: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).createAndSetupLoopDevice(0xc001372000, 
0xc002e367c0, 0x3b, 0xc004ee4ed0, 0x1400000)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 +0x45b
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeBlock(0xc001372000, 0xc004ee4ed0, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:146 +0x65
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeBlockFS(0xc001372000, 0xc004ee4ed0, 0x0, 0x78cd2a8)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:174 +0x5a
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc001372000, 0xc004ee4ed0, 0x7030fcb, 0x7, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:308 +0x391
k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc005204d80, 0x70587e6, 0x11, 0xc004ee4ed0, 0x1, 0x0, 0x0, 0xc005182c00)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:837 +0x157
k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc005204d80, 0x70587e6, 0x11, 0xc004ee4ed0, 0x1, 0x703610f, 0x9, 0x0, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1102 +0x87
k8s.io/kubernetes/test/e2e/storage.glob..func21.2.1()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 +0xb6
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0004cfb00)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0004cfb00)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
testing.tRunner(0xc0004cfb00, 0x729c7d8)
    /usr/local/go/src/testing/testing.go:1203 +0xe5
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1248 +0x2b3
[AfterEach] [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "persistent-local-volumes-test-8058".
STEP: Found 4 events.
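Note on the failure above: the FAIL comes from the losetup -f call in the block-device setup. dd created the backing file, but attaching it failed with "No such device or address", which on kind nodes commonly indicates that losetup picked a loop index for which no usable /dev/loopN node exists inside the node container (pre-creating those nodes is what the create-loop-devs daemonset in kube-system is for), or that the kernel has run out of loop devices. A rough manual check and workaround is sketched below; the device index is illustrative, not taken from this run.

# on the affected node (the v122-worker2 container in this kind cluster), inspect loop devices
losetup -f          # prints the next free loop device, or fails if none is usable
losetup -a          # lists devices that are already attached
ls /dev/loop*       # device nodes visible inside the node's /dev

# if losetup picked an index with no matching device node, creating the node and retrying is
# the usual workaround (loop devices use block major number 7; the index 20 is illustrative)
mknod /dev/loop20 b 7 20
losetup -f /tmp/local-volume-test-c6fd2a9f-0f53-4652-97c8-e7ab466b6742/file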
Oct 5 12:06:30.920: INFO: At 2022-10-05 12:06:22 +0000 UTC - event for hostexec-v122-worker2-9hn46: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-8058/hostexec-v122-worker2-9hn46 to v122-worker2 Oct 5 12:06:30.920: INFO: At 2022-10-05 12:06:24 +0000 UTC - event for hostexec-v122-worker2-9hn46: {kubelet v122-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Oct 5 12:06:30.920: INFO: At 2022-10-05 12:06:25 +0000 UTC - event for hostexec-v122-worker2-9hn46: {kubelet v122-worker2} Created: Created container agnhost-container Oct 5 12:06:30.920: INFO: At 2022-10-05 12:06:25 +0000 UTC - event for hostexec-v122-worker2-9hn46: {kubelet v122-worker2} Started: Started container agnhost-container Oct 5 12:06:30.923: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 12:06:30.923: INFO: hostexec-v122-worker2-9hn46 v122-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:06:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:06:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:06:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:06:22 +0000 UTC }] Oct 5 12:06:30.923: INFO: Oct 5 12:06:30.927: INFO: Logging node info for node v122-control-plane Oct 5 12:06:30.929: INFO: Node Info: &Node{ObjectMeta:{v122-control-plane 0bba5de9-314a-4743-bf02-bde0ec06daf3 5868 0 2022-10-05 11:59:47 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-10-05 11:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 11:59:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 12:00:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:v122-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:90a9e9edfe9d44d59ee2bec7a8da01cd,SystemUUID:2e684780-1fcb-4016-9109-255b79db130f,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c 
k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:06:30.930: INFO: Logging kubelet events for node v122-control-plane Oct 5 12:06:30.934: INFO: Logging pods the kubelet thinks is on node v122-control-plane Oct 5 12:06:30.970: INFO: create-loop-devs-lvpbc started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:30.970: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:06:30.970: INFO: etcd-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:30.970: INFO: Container etcd ready: true, restart count 0 Oct 5 12:06:30.970: INFO: kube-apiserver-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:30.970: INFO: Container kube-apiserver ready: true, restart count 0 Oct 5 12:06:30.970: INFO: kube-controller-manager-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:30.970: INFO: Container kube-controller-manager ready: true, restart count 0 Oct 5 12:06:30.970: INFO: kube-scheduler-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:30.970: INFO: Container kube-scheduler ready: true, restart count 0 Oct 5 12:06:30.970: INFO: kindnet-g8rqz started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:30.970: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:06:30.970: INFO: kube-proxy-xtt57 started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:30.970: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:06:31.035: INFO: Latency metrics for node v122-control-plane Oct 5 12:06:31.035: INFO: Logging node info for node v122-worker Oct 5 12:06:31.038: INFO: Node Info: &Node{ObjectMeta:{v122-worker 8286eab4-ee46-4103-bc96-cf44e85cf562 6265 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-5388":"csi-mock-csi-mock-volumes-5388"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:03:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:04:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:04:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:04:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:04:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:v122-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ce5667169114cc58989bd26cdb88021,SystemUUID:f1b8869e-1c17-4972-b832-4d15146806a4,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca 
k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:06:31.039: INFO: Logging kubelet events for node v122-worker Oct 5 12:06:31.043: INFO: Logging pods the kubelet thinks is on node v122-worker Oct 5 12:06:31.051: INFO: pvc-volume-tester-v4tcb started at 2022-10-05 12:05:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.051: INFO: Container volume-tester ready: false, restart count 0 Oct 5 12:06:31.051: INFO: csi-mockplugin-0 started at 2022-10-05 12:04:53 +0000 UTC (0+4 container statuses recorded) Oct 5 12:06:31.051: INFO: Container busybox ready: true, restart count 0 Oct 5 12:06:31.051: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:06:31.051: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:06:31.051: INFO: Container mock ready: true, restart count 0 Oct 5 12:06:31.051: INFO: create-loop-devs-f76cj started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.051: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:06:31.051: INFO: test-hostpath-type-2m42g started at 2022-10-05 12:06:22 +0000 UTC (0+1 container statuses recorded) 
Oct 5 12:06:31.051: INFO: Container host-path-testing ready: true, restart count 0 Oct 5 12:06:31.051: INFO: kindnet-rkh8m started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.051: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:06:31.051: INFO: kube-proxy-xkzrn started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.051: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:06:31.051: INFO: pod-secrets-9d1aa6a5-fe49-413f-85a9-4c2a8e6f4e5b started at 2022-10-05 12:03:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.051: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:06:31.051: INFO: pod-secrets-76b16dac-27d0-4343-a0fe-b8ed5dd81977 started at 2022-10-05 12:06:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.051: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:06:31.051: INFO: inline-volume-zgcwq started at 2022-10-05 12:05:51 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.051: INFO: Container volume-tester ready: false, restart count 0 Oct 5 12:06:31.152: INFO: Latency metrics for node v122-worker Oct 5 12:06:31.152: INFO: Logging node info for node v122-worker2 Oct 5 12:06:31.155: INFO: Node Info: &Node{ObjectMeta:{v122-worker2 e098b7b6-6804-492f-b9ec-650d1924542e 7494 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-3443":"csi-mock-csi-mock-volumes-3443","csi-mock-csi-mock-volumes-4859":"csi-mock-csi-mock-volumes-4859"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-10-05 12:05:40 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-10-05 12:05:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:06:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:06:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:06:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:06:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:v122-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:feea07f38e414515ae57b946e27fa7bb,SystemUUID:07d898dc-4331-403b-9bdf-da8ef413d01c,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-3443^4],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3443^4,DevicePath:,},},Config:nil,},} Oct 5 12:06:31.156: INFO: Logging kubelet events for node v122-worker2 Oct 5 12:06:31.161: INFO: Logging pods the kubelet thinks is on node v122-worker2 Oct 5 12:06:31.178: INFO: pod-3ab8a4ae-b3e8-4faf-add9-0aba4e67e474 started at 2022-10-05 12:06:25 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:06:31.178: INFO: coredns-78fcd69978-vrzs8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container coredns ready: true, restart count 0 Oct 5 12:06:31.178: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:05:28 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container csi-attacher ready: true, restart count 0 Oct 5 12:06:31.178: INFO: pvc-volume-tester-sclqv started at 2022-10-05 12:05:39 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container volume-tester ready: true, restart count 0 Oct 5 12:06:31.178: INFO: pod-4f6ec1c1-017d-4404-8560-d707e28d0ae3 started at 2022-10-05 12:06:24 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:06:31.178: INFO: hostexec-v122-worker2-pthbp started at (0+0 container statuses recorded) Oct 5 12:06:31.178: INFO: create-loop-devs-6sf59 started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:06:31.178: INFO: pod-249ab2b4-6793-4d84-824e-353f8d8a5436 started at 2022-10-05 12:06:20 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:06:31.178: INFO: pod-011e923f-2871-4544-a438-86d32bc5cee1 started at 2022-10-05 12:06:20 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:06:31.178: INFO: hostexec-v122-worker2-rkx8w started at 2022-10-05 12:06:21 +0000 
UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:06:31.178: INFO: hostexec-v122-worker2-9hn46 started at 2022-10-05 12:06:22 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:06:31.178: INFO: pvc-volume-tester-6w4f2 started at 2022-10-05 12:03:24 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container volume-tester ready: true, restart count 0 Oct 5 12:06:31.178: INFO: kindnet-vqtz2 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:06:31.178: INFO: kube-proxy-pwsq7 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:06:31.178: INFO: hostexec-v122-worker2-8xzz7 started at 2022-10-05 12:06:00 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:06:31.178: INFO: csi-mockplugin-0 started at 2022-10-05 12:05:28 +0000 UTC (0+3 container statuses recorded) Oct 5 12:06:31.178: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:06:31.178: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:06:31.178: INFO: Container mock ready: true, restart count 0 Oct 5 12:06:31.178: INFO: coredns-78fcd69978-srwh8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container coredns ready: true, restart count 0 Oct 5 12:06:31.178: INFO: local-path-provisioner-58c8ccd54c-lkwwv started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container local-path-provisioner ready: true, restart count 0 Oct 5 12:06:31.178: INFO: csi-mockplugin-0 started at 2022-10-05 12:03:12 +0000 UTC (0+3 container statuses recorded) Oct 5 12:06:31.178: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:06:31.178: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:06:31.178: INFO: Container mock ready: true, restart count 0 Oct 5 12:06:31.178: INFO: hostexec-v122-worker2-cz4n9 started at 2022-10-05 12:06:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:06:31.178: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:06:31.745: INFO: Latency metrics for node v122-worker2 Oct 5 12:06:31.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8058" for this suite. 
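The node-info and pod dumps above are the framework's automatic failure diagnostics: events in the failing namespace, node conditions and images, and the pods each kubelet reports. Roughly the same picture can be collected by hand; a minimal sketch against this cluster follows, with the namespace and node name taken from the log.

# events recorded in the failing test's namespace
kubectl get events -n persistent-local-volumes-test-8058 --sort-by=.lastTimestamp

# conditions, capacity and images for the node that hosted the hostexec pod
kubectl describe node v122-worker2

# pods running on that node, across all namespaces
kubectl get pods --all-namespaces --field-selector spec.nodeName=v122-worker2 -o wide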
• Failure in Spec Setup (BeforeEach) [9.080 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Oct 5 12:06:30.914: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:00.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d" Oct 5 12:06:06.135: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d && dd if=/dev/zero of=/tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d/file] Namespace:persistent-local-volumes-test-3745 PodName:hostexec-v122-worker2-8xzz7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:06.135: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:06.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3745 PodName:hostexec-v122-worker2-8xzz7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:06.265: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:06.381: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop9 && mount -t ext4 /dev/loop9 /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d && chmod o+rwx /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d] Namespace:persistent-local-volumes-test-3745 PodName:hostexec-v122-worker2-8xzz7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:06.381: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:06:06.867: INFO: Creating a PV followed by a PVC Oct 5 12:06:06.877: INFO: Waiting for PV local-pvffngt to bind to PVC pvc-zkhfh Oct 5 12:06:06.877: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zkhfh] to have phase Bound Oct 5 12:06:06.881: INFO: PersistentVolumeClaim pvc-zkhfh found but phase is Pending instead of Bound. Oct 5 12:06:08.885: INFO: PersistentVolumeClaim pvc-zkhfh found but phase is Pending instead of Bound. Oct 5 12:06:10.890: INFO: PersistentVolumeClaim pvc-zkhfh found but phase is Pending instead of Bound. Oct 5 12:06:12.895: INFO: PersistentVolumeClaim pvc-zkhfh found but phase is Pending instead of Bound. Oct 5 12:06:14.900: INFO: PersistentVolumeClaim pvc-zkhfh found but phase is Pending instead of Bound. Oct 5 12:06:16.905: INFO: PersistentVolumeClaim pvc-zkhfh found but phase is Pending instead of Bound. Oct 5 12:06:18.910: INFO: PersistentVolumeClaim pvc-zkhfh found but phase is Pending instead of Bound. Oct 5 12:06:20.915: INFO: PersistentVolumeClaim pvc-zkhfh found and phase=Bound (14.03745348s) Oct 5 12:06:20.915: INFO: Waiting up to 3m0s for PersistentVolume local-pvffngt to have phase Bound Oct 5 12:06:20.918: INFO: PersistentVolume local-pvffngt found and phase=Bound (3.21051ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Oct 5 12:06:24.944: INFO: pod "pod-011e923f-2871-4544-a438-86d32bc5cee1" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:06:24.944: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3745 PodName:pod-011e923f-2871-4544-a438-86d32bc5cee1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:24.944: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:25.061: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:06:25.061: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3745 PodName:pod-011e923f-2871-4544-a438-86d32bc5cee1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:25.061: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:25.122: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Oct 5 12:06:33.141: INFO: pod "pod-3ab8a4ae-b3e8-4faf-add9-0aba4e67e474" created on Node "v122-worker2" Oct 5 12:06:33.141: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3745 PodName:pod-3ab8a4ae-b3e8-4faf-add9-0aba4e67e474 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:33.141: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:33.273: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Oct 5 12:06:33.273: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d > /mnt/volume1/test-file] 
Namespace:persistent-local-volumes-test-3745 PodName:pod-3ab8a4ae-b3e8-4faf-add9-0aba4e67e474 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:33.273: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:33.383: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Oct 5 12:06:33.383: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3745 PodName:pod-011e923f-2871-4544-a438-86d32bc5cee1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:06:33.383: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:33.473: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-011e923f-2871-4544-a438-86d32bc5cee1 in namespace persistent-local-volumes-test-3745 STEP: Deleting pod2 STEP: Deleting pod pod-3ab8a4ae-b3e8-4faf-add9-0aba4e67e474 in namespace persistent-local-volumes-test-3745 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:06:33.483: INFO: Deleting PersistentVolumeClaim "pvc-zkhfh" Oct 5 12:06:33.488: INFO: Deleting PersistentVolume "local-pvffngt" Oct 5 12:06:33.494: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d] Namespace:persistent-local-volumes-test-3745 PodName:hostexec-v122-worker2-8xzz7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:33.494: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:33.644: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3745 PodName:hostexec-v122-worker2-8xzz7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:33.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop9" on node "v122-worker2" at path /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d/file Oct 5 12:06:33.798: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop9] Namespace:persistent-local-volumes-test-3745 PodName:hostexec-v122-worker2-8xzz7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:33.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d Oct 5 12:06:33.887: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d] Namespace:persistent-local-volumes-test-3745 PodName:hostexec-v122-worker2-8xzz7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:33.887: INFO: >>> kubeConfig: /root/.kube/config 
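The blockfswithformat volume type exercised in this spec is a loop device with an ext4 filesystem on it. The setup and teardown the framework drives through the hostexec pod (nsenter into the node's mount namespace) amount to the sequence below; the backing path is the one from this spec, and the loop device is whatever losetup -f picked.

DIR=/tmp/local-volume-test-4f173c23-cc36-4573-9aab-b22f96005e3d

# setup: backing file, loop device, ext4 filesystem, world-writable mount
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
losetup -f "$DIR/file"
DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
mkfs -t ext4 "$DEV" && mount -t ext4 "$DEV" "$DIR" && chmod o+rwx "$DIR"

# teardown (after the PV and PVC are deleted): unmount, detach the loop device, remove the directory
umount "$DIR"
losetup -d "$DEV"
rm -r "$DIR"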
[AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:34.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3745" for this suite. • [SLOW TEST:33.945 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:31.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Oct 5 12:06:31.405: INFO: The status of Pod test-hostpath-type-9dhmt is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:06:33.408: INFO: The status of Pod test-hostpath-type-9dhmt is Running (Ready = true) STEP: running on node v122-worker STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:37.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-8829" for this suite. 
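The HostPathType spec above depends on the kubelet rejecting a hostPath mount whose declared type does not match what is actually on the node: 'adir' is first created through a HostPathDirectoryOrCreate mount, and a second pod then requests the same path with type File, which must fail and surface as an error event ("Checking for HostPathType error event" in the log). A minimal stand-alone reproduction is sketched below; the pod name and path are made up for illustration and assume a directory already exists at that path on the node.

# a pod that declares an existing directory as hostPath type File must fail at mount time
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo            # hypothetical name
spec:
  nodeName: v122-worker
  containers:
  - name: c
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "3600"]
    volumeMounts:
    - name: v
      mountPath: /mnt/test
  volumes:
  - name: v
    hostPath:
      path: /tmp/hostpath-type-demo/adir   # hypothetical path, assumed to be a directory on the node
      type: File                           # mismatch: the path is a directory, not a regular file
EOF

# the mismatch shows up as a mount failure event on the pod, not as an API error on create
kubectl get events --field-selector involvedObject.name=hostpath-type-demo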
• [SLOW TEST:6.109 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile","total":-1,"completed":6,"skipped":341,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:34.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 STEP: Creating a pod to test hostPath r/w Oct 5 12:06:34.124: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6928" to be "Succeeded or Failed" Oct 5 12:06:34.127: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.589041ms Oct 5 12:06:36.131: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2.006951531s Oct 5 12:06:38.137: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.012436032s Oct 5 12:06:40.142: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017467268s STEP: Saw pod success Oct 5 12:06:40.142: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Oct 5 12:06:40.145: INFO: Trying to get logs from node v122-worker pod pod-host-path-test container test-container-2: STEP: delete the pod Oct 5 12:06:40.161: INFO: Waiting for pod pod-host-path-test to disappear Oct 5 12:06:40.164: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:40.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6928" for this suite. 
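The HostPath r/w spec that follows waits for pod-host-path-test to reach a terminal phase ("Succeeded or Failed", up to 5m0s) before reading the second container's logs. Outside the framework the same wait is just a poll on status.phase; a small sketch with the namespace, pod and container names from the log is shown here (the pod is deleted by the test afterwards, so this only works while it exists).

# poll the pod phase the way the framework's "Succeeded or Failed" wait does (60 x 5s ~= the 5m timeout)
for i in $(seq 1 60); do
  phase=$(kubectl get pod pod-host-path-test -n hostpath-6928 -o jsonpath='{.status.phase}')
  [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ] && break
  sleep 5
done
echo "final phase: $phase"

# on success the framework inspects the reader container's output
kubectl logs pod-host-path-test -n hostpath-6928 -c test-container-2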
• [SLOW TEST:6.087 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":9,"skipped":300,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:40.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Oct 5 12:06:40.229: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:40.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-264" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.049 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:77 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:21.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-e3b2967c-1088-4a86-ba9a-6bbdd8d11421" Oct 5 12:06:25.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e3b2967c-1088-4a86-ba9a-6bbdd8d11421 && dd if=/dev/zero of=/tmp/local-volume-test-e3b2967c-1088-4a86-ba9a-6bbdd8d11421/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e3b2967c-1088-4a86-ba9a-6bbdd8d11421/file] Namespace:persistent-local-volumes-test-7293 PodName:hostexec-v122-worker2-rkx8w ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:25.937: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:26.145: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e3b2967c-1088-4a86-ba9a-6bbdd8d11421/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7293 PodName:hostexec-v122-worker2-rkx8w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:26.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:06:26.313: INFO: Creating a PV followed by a PVC Oct 5 12:06:26.321: INFO: Waiting for PV local-pvvkvvm to bind to PVC pvc-7trqz Oct 5 12:06:26.321: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7trqz] to have phase Bound Oct 5 12:06:26.324: INFO: PersistentVolumeClaim pvc-7trqz found but phase is Pending instead of Bound. Oct 5 12:06:28.329: INFO: PersistentVolumeClaim pvc-7trqz found but phase is Pending instead of Bound. Oct 5 12:06:30.333: INFO: PersistentVolumeClaim pvc-7trqz found but phase is Pending instead of Bound. Oct 5 12:06:32.337: INFO: PersistentVolumeClaim pvc-7trqz found but phase is Pending instead of Bound. Oct 5 12:06:34.341: INFO: PersistentVolumeClaim pvc-7trqz found but phase is Pending instead of Bound. Oct 5 12:06:36.346: INFO: PersistentVolumeClaim pvc-7trqz found and phase=Bound (10.024549854s) Oct 5 12:06:36.346: INFO: Waiting up to 3m0s for PersistentVolume local-pvvkvvm to have phase Bound Oct 5 12:06:36.349: INFO: PersistentVolume local-pvvkvvm found and phase=Bound (3.555328ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Oct 5 12:06:40.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7293 exec pod-822abc8c-8b62-4169-bbaf-4d3062cf8408 --namespace=persistent-local-volumes-test-7293 -- stat -c %g /mnt/volume1' Oct 5 12:06:40.597: INFO: stderr: "" Oct 5 12:06:40.597: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-822abc8c-8b62-4169-bbaf-4d3062cf8408 in namespace persistent-local-volumes-test-7293 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:06:40.603: INFO: Deleting PersistentVolumeClaim "pvc-7trqz" Oct 5 12:06:40.608: INFO: Deleting PersistentVolume "local-pvvkvvm" Oct 5 12:06:40.613: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e3b2967c-1088-4a86-ba9a-6bbdd8d11421/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7293 PodName:hostexec-v122-worker2-rkx8w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:40.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop8" on node "v122-worker2" at path 
/tmp/local-volume-test-e3b2967c-1088-4a86-ba9a-6bbdd8d11421/file Oct 5 12:06:40.754: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-7293 PodName:hostexec-v122-worker2-rkx8w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:40.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-e3b2967c-1088-4a86-ba9a-6bbdd8d11421 Oct 5 12:06:40.880: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e3b2967c-1088-4a86-ba9a-6bbdd8d11421] Namespace:persistent-local-volumes-test-7293 PodName:hostexec-v122-worker2-rkx8w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:40.880: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:41.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7293" for this suite. • [SLOW TEST:19.161 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":7,"skipped":163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:40.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-1af6d69e-4a59-4ed3-ba2a-adc40c8f164d" Oct 5 12:06:42.340: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1af6d69e-4a59-4ed3-ba2a-adc40c8f164d && dd if=/dev/zero of=/tmp/local-volume-test-1af6d69e-4a59-4ed3-ba2a-adc40c8f164d/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-1af6d69e-4a59-4ed3-ba2a-adc40c8f164d/file] 
Namespace:persistent-local-volumes-test-9045 PodName:hostexec-v122-worker-8cd86 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:42.340: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:42.550: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1af6d69e-4a59-4ed3-ba2a-adc40c8f164d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9045 PodName:hostexec-v122-worker-8cd86 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:42.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:06:42.653: INFO: Creating a PV followed by a PVC Oct 5 12:06:42.660: INFO: Waiting for PV local-pvkh9lr to bind to PVC pvc-rcv8j Oct 5 12:06:42.660: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-rcv8j] to have phase Bound Oct 5 12:06:42.662: INFO: PersistentVolumeClaim pvc-rcv8j found but phase is Pending instead of Bound. Oct 5 12:06:44.666: INFO: PersistentVolumeClaim pvc-rcv8j found and phase=Bound (2.005928548s) Oct 5 12:06:44.666: INFO: Waiting up to 3m0s for PersistentVolume local-pvkh9lr to have phase Bound Oct 5 12:06:44.669: INFO: PersistentVolume local-pvkh9lr found and phase=Bound (3.459591ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Oct 5 12:06:44.676: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:06:44.677: INFO: Deleting PersistentVolumeClaim "pvc-rcv8j" Oct 5 12:06:44.683: INFO: Deleting PersistentVolume "local-pvkh9lr" Oct 5 12:06:44.688: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1af6d69e-4a59-4ed3-ba2a-adc40c8f164d/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9045 PodName:hostexec-v122-worker-8cd86 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:44.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop9" on node "v122-worker" at path /tmp/local-volume-test-1af6d69e-4a59-4ed3-ba2a-adc40c8f164d/file Oct 5 12:06:44.839: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop9] Namespace:persistent-local-volumes-test-9045 PodName:hostexec-v122-worker-8cd86 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:44.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-1af6d69e-4a59-4ed3-ba2a-adc40c8f164d Oct 5 12:06:45.018: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1af6d69e-4a59-4ed3-ba2a-adc40c8f164d] Namespace:persistent-local-volumes-test-9045 PodName:hostexec-v122-worker-8cd86 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:45.018: INFO: >>> kubeConfig: /root/.kube/config 
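
A note on the block-device fixtures above: both [Volume type: blockfswithoutformat] and [Volume type: block] back the local volume with a loop device that the suite assembles on the node through a hostexec pod (mkdir -p, dd, losetup -f) and later tears down with losetup -d and rm -r. Below is a minimal standalone sketch of that same shell sequence driven from Go with os/exec; the directory name and file size are illustrative placeholders rather than values the suite generates, and it runs locally instead of through a hostexec pod.

// Sketch: reproduce the loop-device setup/teardown the suite performs on the
// node (mkdir -p + dd + losetup -f, then losetup -d + rm -r). Illustrative
// only; the e2e suite runs these commands inside a hostexec pod.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	dir := "/tmp/local-volume-test-example" // hypothetical path, not one from this run
	// Create a 20 MiB backing file and attach it to a free loop device.
	if _, err := run(fmt.Sprintf(
		"mkdir -p %s && dd if=/dev/zero of=%s/file bs=4096 count=5120 && losetup -f %s/file",
		dir, dir, dir)); err != nil {
		panic(err)
	}
	// Discover which loop device was assigned, the same way the suite greps losetup output.
	loopDev, err := run(fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir))
	if err != nil {
		panic(err)
	}
	fmt.Println("backing loop device:", loopDev)
	// Teardown mirrors the AfterEach: detach the device, then remove the directory.
	if _, err := run(fmt.Sprintf("losetup -d %s && rm -r %s", loopDev, dir)); err != nil {
		panic(err)
	}
}
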
[AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:45.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9045" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.878 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 We don't set fsGroup on block device, skipped. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:45.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Oct 5 12:06:45.211: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:45.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-1144" for this suite. 
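
The Pod Disks spec that follows is skipped with "Requires at least 2 nodes (not -1)"; the gate is essentially a count of usable nodes. A rough client-go sketch of such a count is below — the kubeconfig path and the readiness criteria are assumptions for illustration, not the framework's own helper.

// Sketch: count schedulable, Ready nodes, the kind of check behind a
// "requires at least N nodes" skip. Assumes a kubeconfig in the default location.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, n := range nodes.Items {
		if n.Spec.Unschedulable {
			continue // cordoned nodes don't count
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready++
				break
			}
		}
	}
	fmt.Printf("schedulable ready nodes: %d (the skipped spec wants >= 2)\n", ready)
}
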
S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be able to delete a non-existent PD without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:45.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 STEP: Creating a pod to test downward API volume plugin Oct 5 12:06:45.263: INFO: Waiting up to 5m0s for pod "metadata-volume-fd9fe09b-f96b-418b-97c0-166bbd205d0e" in namespace "projected-3638" to be "Succeeded or Failed" Oct 5 12:06:45.267: INFO: Pod "metadata-volume-fd9fe09b-f96b-418b-97c0-166bbd205d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.458806ms Oct 5 12:06:47.271: INFO: Pod "metadata-volume-fd9fe09b-f96b-418b-97c0-166bbd205d0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00762024s Oct 5 12:06:49.276: INFO: Pod "metadata-volume-fd9fe09b-f96b-418b-97c0-166bbd205d0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012467591s STEP: Saw pod success Oct 5 12:06:49.276: INFO: Pod "metadata-volume-fd9fe09b-f96b-418b-97c0-166bbd205d0e" satisfied condition "Succeeded or Failed" Oct 5 12:06:49.279: INFO: Trying to get logs from node v122-worker pod metadata-volume-fd9fe09b-f96b-418b-97c0-166bbd205d0e container client-container: STEP: delete the pod Oct 5 12:06:49.295: INFO: Waiting for pod metadata-volume-fd9fe09b-f96b-418b-97c0-166bbd205d0e to disappear Oct 5 12:06:49.298: INFO: Pod metadata-volume-fd9fe09b-f96b-418b-97c0-166bbd205d0e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:49.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3638" for this suite. 
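
The Projected downwardAPI spec above mounts the pod's own name into a volume file and runs the pod as non-root with an fsGroup. A hedged sketch of a pod in that shape follows; the image, UID/GID values, and file path are placeholders, and the projected downwardAPI source here only approximates the test's exact volume definition.

// Sketch: pod exposing its own name via a projected downwardAPI volume,
// running as non-root with an fsGroup. Values are illustrative placeholders.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "metadata-volume-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root
				FSGroup:   int64Ptr(2000), // group ownership applied to the volume
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = downwardAPIPod() }
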
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":10,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:49.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:06:49.392: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:49.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5782" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:156 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:28.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Oct 5 12:06:34.756: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a6181381-b531-404a-87a4-9bc314c5a749] Namespace:persistent-local-volumes-test-1890 PodName:hostexec-v122-worker2-pthbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:34.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:06:34.885: INFO: Creating a PV followed by a PVC Oct 5 12:06:34.894: INFO: Waiting for PV local-pvmgf7j to bind to PVC pvc-gb8kt Oct 5 12:06:34.894: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-gb8kt] to have phase Bound Oct 5 12:06:34.897: 
INFO: PersistentVolumeClaim pvc-gb8kt found but phase is Pending instead of Bound. Oct 5 12:06:36.901: INFO: PersistentVolumeClaim pvc-gb8kt found but phase is Pending instead of Bound. Oct 5 12:06:38.904: INFO: PersistentVolumeClaim pvc-gb8kt found but phase is Pending instead of Bound. Oct 5 12:06:40.914: INFO: PersistentVolumeClaim pvc-gb8kt found but phase is Pending instead of Bound. Oct 5 12:06:42.919: INFO: PersistentVolumeClaim pvc-gb8kt found but phase is Pending instead of Bound. Oct 5 12:06:44.923: INFO: PersistentVolumeClaim pvc-gb8kt found but phase is Pending instead of Bound. Oct 5 12:06:46.927: INFO: PersistentVolumeClaim pvc-gb8kt found but phase is Pending instead of Bound. Oct 5 12:06:48.931: INFO: PersistentVolumeClaim pvc-gb8kt found but phase is Pending instead of Bound. Oct 5 12:06:50.936: INFO: PersistentVolumeClaim pvc-gb8kt found and phase=Bound (16.041780022s) Oct 5 12:06:50.936: INFO: Waiting up to 3m0s for PersistentVolume local-pvmgf7j to have phase Bound Oct 5 12:06:50.939: INFO: PersistentVolume local-pvmgf7j found and phase=Bound (3.096013ms) [It] should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 STEP: local-volume-type: dir Oct 5 12:06:50.950: INFO: Waiting up to 5m0s for pod "pod-679af94a-f391-4db9-aeae-bc5506655c46" in namespace "persistent-local-volumes-test-1890" to be "Unschedulable" Oct 5 12:06:50.954: INFO: Pod "pod-679af94a-f391-4db9-aeae-bc5506655c46": Phase="Pending", Reason="", readiness=false. Elapsed: 3.250142ms Oct 5 12:06:52.958: INFO: Pod "pod-679af94a-f391-4db9-aeae-bc5506655c46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008055092s Oct 5 12:06:52.958: INFO: Pod "pod-679af94a-f391-4db9-aeae-bc5506655c46" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Oct 5 12:06:52.959: INFO: Deleting PersistentVolumeClaim "pvc-gb8kt" Oct 5 12:06:52.964: INFO: Deleting PersistentVolume "local-pvmgf7j" STEP: Removing the test directory Oct 5 12:06:52.969: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a6181381-b531-404a-87a4-9bc314c5a749] Namespace:persistent-local-volumes-test-1890 PodName:hostexec-v122-worker2-pthbp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:52.969: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:53.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1890" for this suite. 
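
The spec above asserts that a pod pinned (via nodeSelector) to a node other than the one named in the local PV's nodeAffinity never schedules. A sketch of such a node-pinned local PV follows; the node name, path, and capacity are placeholders.

// Sketch: a local PV restricted to one node by required nodeAffinity. A pod
// whose nodeSelector names a different host stays Pending, which is what the
// spec above asserts. All values are illustrative.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func localPVOnNode(nodeName string) *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-example"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-test-example"},
			},
			// The scheduler only places consuming pods on nodes matching this selector.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}

func main() { _ = localPVOnNode("v122-worker2") }
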
• [SLOW TEST:24.429 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":7,"skipped":327,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:41.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:06:43.176: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-6b3a06e5-3c79-4c75-a6f8-bf851fe1d1f2-backend && ln -s /tmp/local-volume-test-6b3a06e5-3c79-4c75-a6f8-bf851fe1d1f2-backend /tmp/local-volume-test-6b3a06e5-3c79-4c75-a6f8-bf851fe1d1f2] Namespace:persistent-local-volumes-test-3739 PodName:hostexec-v122-worker2-r5wtn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:43.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:06:43.321: INFO: Creating a PV followed by a PVC Oct 5 12:06:43.331: INFO: Waiting for PV local-pvzzvkv to bind to PVC pvc-cs7bp Oct 5 12:06:43.331: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cs7bp] to have phase Bound Oct 5 12:06:43.337: INFO: PersistentVolumeClaim pvc-cs7bp found but phase is Pending instead of Bound. Oct 5 12:06:45.340: INFO: PersistentVolumeClaim pvc-cs7bp found but phase is Pending instead of Bound. Oct 5 12:06:47.344: INFO: PersistentVolumeClaim pvc-cs7bp found but phase is Pending instead of Bound. Oct 5 12:06:49.348: INFO: PersistentVolumeClaim pvc-cs7bp found but phase is Pending instead of Bound. 
Oct 5 12:06:51.353: INFO: PersistentVolumeClaim pvc-cs7bp found and phase=Bound (8.022163731s) Oct 5 12:06:51.353: INFO: Waiting up to 3m0s for PersistentVolume local-pvzzvkv to have phase Bound Oct 5 12:06:51.356: INFO: PersistentVolume local-pvzzvkv found and phase=Bound (3.129449ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Oct 5 12:06:53.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-3739 exec pod-0b4dd699-c3a2-49a9-ac0b-179867220f0c --namespace=persistent-local-volumes-test-3739 -- stat -c %g /mnt/volume1' Oct 5 12:06:53.621: INFO: stderr: "" Oct 5 12:06:53.621: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-0b4dd699-c3a2-49a9-ac0b-179867220f0c in namespace persistent-local-volumes-test-3739 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:06:53.626: INFO: Deleting PersistentVolumeClaim "pvc-cs7bp" Oct 5 12:06:53.630: INFO: Deleting PersistentVolume "local-pvzzvkv" STEP: Removing the test directory Oct 5 12:06:53.635: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6b3a06e5-3c79-4c75-a6f8-bf851fe1d1f2 && rm -r /tmp/local-volume-test-6b3a06e5-3c79-4c75-a6f8-bf851fe1d1f2-backend] Namespace:persistent-local-volumes-test-3739 PodName:hostexec-v122-worker2-r5wtn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:53.635: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:53.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3739" for this suite. 
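
Several specs in this block log "Waiting up to timeout=3m0s for PersistentVolumeClaims [...] to have phase Bound" with retries roughly every two seconds. A small client-go sketch of that polling loop follows, exercised against a fake clientset; it is an approximation of the framework helper, not the helper itself.

// Sketch: poll a PVC until it reports phase Bound or the timeout expires,
// mirroring the bind-wait loops seen in this log. Demonstrated with a fake
// clientset that already holds a Bound claim.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitForPVCBound polls every 2s (the log's approximate cadence) until the
// claim is Bound or the timeout elapses.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

func main() {
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-example", Namespace: "default"},
		Status:     corev1.PersistentVolumeClaimStatus{Phase: corev1.ClaimBound},
	}
	cs := fake.NewSimpleClientset(pvc)
	if err := waitForPVCBound(cs, "default", "pvc-example", 3*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pvc-example is Bound")
}
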
• [SLOW TEST:12.667 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":8,"skipped":205,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:53.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Oct 5 12:06:53.202: INFO: The status of Pod test-hostpath-type-cvpkj is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:06:55.207: INFO: The status of Pod test-hostpath-type-cvpkj is Running (Ready = true) STEP: running on node v122-worker [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:06:57.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-2741" for this suite. 
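
The HostPathType Socket spec that follows succeeds only because the hostPath volume's type is set to Socket and the path already exists as a UNIX socket on the node; kubelet refuses the mount otherwise. A sketch of that volume shape, with a placeholder socket path:

// Sketch: hostPath volume with an explicit HostPathSocket type. The path is
// a placeholder; it must already exist as a UNIX socket on the node.
package main

import corev1 "k8s.io/api/core/v1"

func hostPathSocketVolume() corev1.Volume {
	socketType := corev1.HostPathSocket
	return corev1.Volume{
		Name: "asocket",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/var/run/example/asocket", // must exist as a socket beforehand
				Type: &socketType,                // kubelet validates the type before mounting
			},
		},
	}
}

func main() { _ = hostPathSocketVolume() }
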
• ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket","total":-1,"completed":8,"skipped":342,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:12.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:374 STEP: Building a driver namespace object, basename csi-mock-volumes-4859 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:03:12.745: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-attacher Oct 5 12:03:12.749: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4859 Oct 5 12:03:12.749: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4859 Oct 5 12:03:12.752: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4859 Oct 5 12:03:12.756: INFO: creating *v1.Role: csi-mock-volumes-4859-9110/external-attacher-cfg-csi-mock-volumes-4859 Oct 5 12:03:12.760: INFO: creating *v1.RoleBinding: csi-mock-volumes-4859-9110/csi-attacher-role-cfg Oct 5 12:03:12.763: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-provisioner Oct 5 12:03:12.767: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4859 Oct 5 12:03:12.767: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4859 Oct 5 12:03:12.770: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4859 Oct 5 12:03:12.774: INFO: creating *v1.Role: csi-mock-volumes-4859-9110/external-provisioner-cfg-csi-mock-volumes-4859 Oct 5 12:03:12.777: INFO: creating *v1.RoleBinding: csi-mock-volumes-4859-9110/csi-provisioner-role-cfg Oct 5 12:03:12.780: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-resizer Oct 5 12:03:12.784: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4859 Oct 5 12:03:12.784: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4859 Oct 5 12:03:12.787: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4859 Oct 5 12:03:12.790: INFO: creating *v1.Role: csi-mock-volumes-4859-9110/external-resizer-cfg-csi-mock-volumes-4859 Oct 5 12:03:12.794: INFO: creating *v1.RoleBinding: csi-mock-volumes-4859-9110/csi-resizer-role-cfg Oct 5 12:03:12.798: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-snapshotter Oct 5 12:03:12.801: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4859 Oct 5 12:03:12.801: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4859 Oct 5 12:03:12.805: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4859 Oct 5 12:03:12.808: INFO: creating *v1.Role: csi-mock-volumes-4859-9110/external-snapshotter-leaderelection-csi-mock-volumes-4859 Oct 5 12:03:12.812: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-4859-9110/external-snapshotter-leaderelection Oct 5 12:03:12.816: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-mock Oct 5 12:03:12.820: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4859 Oct 5 12:03:12.823: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4859 Oct 5 12:03:12.827: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4859 Oct 5 12:03:12.830: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4859 Oct 5 12:03:12.834: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4859 Oct 5 12:03:12.837: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4859 Oct 5 12:03:12.841: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4859 Oct 5 12:03:12.844: INFO: creating *v1.StatefulSet: csi-mock-volumes-4859-9110/csi-mockplugin Oct 5 12:03:12.849: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4859 to register on node v122-worker2 STEP: Creating pod Oct 5 12:03:22.370: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:03:22.377: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-dlf2c] to have phase Bound Oct 5 12:03:22.380: INFO: PersistentVolumeClaim pvc-dlf2c found but phase is Pending instead of Bound. Oct 5 12:03:24.384: INFO: PersistentVolumeClaim pvc-dlf2c found and phase=Bound (2.007205901s) STEP: Checking if attaching failed and pod cannot start STEP: Checking if VolumeAttachment was created for the pod STEP: Deploy CSIDriver object with attachRequired=false Oct 5 12:05:26.414: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4859 STEP: Wait for the pod in running status STEP: Wait for the volumeattachment to be deleted up to 7m0s STEP: Deleting pod pvc-volume-tester-6w4f2 Oct 5 12:07:30.434: INFO: Deleting pod "pvc-volume-tester-6w4f2" in namespace "csi-mock-volumes-4859" Oct 5 12:07:30.441: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6w4f2" to be fully deleted STEP: Deleting claim pvc-dlf2c Oct 5 12:07:32.456: INFO: Waiting up to 2m0s for PersistentVolume pvc-59a482bb-12e0-4d0b-9ae4-16da3f6ad4ae to get deleted Oct 5 12:07:32.459: INFO: PersistentVolume pvc-59a482bb-12e0-4d0b-9ae4-16da3f6ad4ae found and phase=Bound (2.756397ms) Oct 5 12:07:34.463: INFO: PersistentVolume pvc-59a482bb-12e0-4d0b-9ae4-16da3f6ad4ae was removed STEP: Deleting storageclass csi-mock-volumes-4859-sck7ksz STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4859 STEP: Waiting for namespaces [csi-mock-volumes-4859] to vanish STEP: uninstalling csi mock driver Oct 5 12:07:40.481: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-attacher Oct 5 12:07:40.486: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4859 Oct 5 12:07:40.491: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4859 Oct 5 12:07:40.495: INFO: deleting *v1.Role: csi-mock-volumes-4859-9110/external-attacher-cfg-csi-mock-volumes-4859 Oct 5 12:07:40.500: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4859-9110/csi-attacher-role-cfg Oct 5 12:07:40.504: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-provisioner Oct 5 12:07:40.509: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4859 Oct 5 12:07:40.514: INFO: deleting *v1.ClusterRoleBinding: 
csi-provisioner-role-csi-mock-volumes-4859 Oct 5 12:07:40.518: INFO: deleting *v1.Role: csi-mock-volumes-4859-9110/external-provisioner-cfg-csi-mock-volumes-4859 Oct 5 12:07:40.523: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4859-9110/csi-provisioner-role-cfg Oct 5 12:07:40.528: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-resizer Oct 5 12:07:40.532: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4859 Oct 5 12:07:40.537: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4859 Oct 5 12:07:40.541: INFO: deleting *v1.Role: csi-mock-volumes-4859-9110/external-resizer-cfg-csi-mock-volumes-4859 Oct 5 12:07:40.545: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4859-9110/csi-resizer-role-cfg Oct 5 12:07:40.549: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-snapshotter Oct 5 12:07:40.554: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4859 Oct 5 12:07:40.558: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4859 Oct 5 12:07:40.564: INFO: deleting *v1.Role: csi-mock-volumes-4859-9110/external-snapshotter-leaderelection-csi-mock-volumes-4859 Oct 5 12:07:40.568: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4859-9110/external-snapshotter-leaderelection Oct 5 12:07:40.572: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4859-9110/csi-mock Oct 5 12:07:40.577: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4859 Oct 5 12:07:40.582: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4859 Oct 5 12:07:40.586: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4859 Oct 5 12:07:40.591: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4859 Oct 5 12:07:40.595: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4859 Oct 5 12:07:40.605: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4859 Oct 5 12:07:40.609: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4859 Oct 5 12:07:40.614: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4859-9110/csi-mockplugin STEP: deleting the driver namespace: csi-mock-volumes-4859-9110 STEP: Waiting for namespaces [csi-mock-volumes-4859-9110] to vanish Oct 5 12:07:46.630: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4859 [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:07:46.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:274.201 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI CSIDriver deployment after pod creation using non-attachable mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:374 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]","total":-1,"completed":2,"skipped":25,"failed":0} 
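
The long CSI mock run above creates the pod first, observes that it cannot start while attachment is still expected, then deploys a CSIDriver with attachRequired=false, after which the pod comes up and the VolumeAttachment is removed. A sketch of such a non-attachable CSIDriver object follows; the driver name is illustrative. With AttachRequired left true (the default when unset by the attacher flow), kubelet waits for the external-attacher to create a VolumeAttachment before mounting.

// Sketch: CSIDriver declaring that volumes of this driver need no attach step,
// so no VolumeAttachment object is required before kubelet mounts.
package main

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func nonAttachableCSIDriver(name string) *storagev1.CSIDriver {
	return &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: storagev1.CSIDriverSpec{
			// Skipping ControllerPublish/attach lets an already-pending pod start
			// once this object exists, as observed in the run above.
			AttachRequired: boolPtr(false),
			PodInfoOnMount: boolPtr(false),
		},
	}
}

func main() { _ = nonAttachableCSIDriver("csi-mock-example") }
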
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:03:07.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected W1005 12:03:08.512616 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 5 12:03:08.512: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 STEP: Creating secret with name s-test-opt-create-0b78db5e-4af8-4a04-b2bc-c989de1d3ce7 STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:08.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2954" for this suite. • [SLOW TEST:301.321 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":1,"skipped":98,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:08.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Oct 5 12:08:08.596: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:08.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-4592" for this suite. 
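
The non-optional projected-secret spec recorded earlier in this block creates a pod whose projected secret volume references a key the secret does not contain, with optional left false, so the volume can never be populated and pod creation is expected to fail. A sketch of a volume in that shape, with placeholder secret and key names:

// Sketch: projected secret volume whose key is absent from the secret and
// whose source is non-optional, so kubelet cannot populate the volume and the
// pod never starts. Names are placeholders.
package main

import corev1 "k8s.io/api/core/v1"

func boolPtr(b bool) *bool { return &b }

func projectedSecretVolume() corev1.Volume {
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
						Items: []corev1.KeyToPath{{
							Key:  "no-such-key", // not present in the secret
							Path: "creds/value",
						}},
						Optional: boolPtr(false), // non-optional: the mount must fail
					},
				}},
			},
		},
	}
}

func main() { _ = projectedSecretVolume() }
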
S [SKIPPING] in Spec Setup (BeforeEach) [0.047 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv4 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:78 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:04:53.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 STEP: Building a driver namespace object, basename csi-mock-volumes-5388 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Oct 5 12:04:53.465: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-attacher Oct 5 12:04:53.469: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5388 Oct 5 12:04:53.469: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5388 Oct 5 12:04:53.472: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5388 Oct 5 12:04:53.476: INFO: creating *v1.Role: csi-mock-volumes-5388-4974/external-attacher-cfg-csi-mock-volumes-5388 Oct 5 12:04:53.480: INFO: creating *v1.RoleBinding: csi-mock-volumes-5388-4974/csi-attacher-role-cfg Oct 5 12:04:53.484: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-provisioner Oct 5 12:04:53.488: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5388 Oct 5 12:04:53.488: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5388 Oct 5 12:04:53.492: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5388 Oct 5 12:04:53.496: INFO: creating *v1.Role: csi-mock-volumes-5388-4974/external-provisioner-cfg-csi-mock-volumes-5388 Oct 5 12:04:53.499: INFO: creating *v1.RoleBinding: csi-mock-volumes-5388-4974/csi-provisioner-role-cfg Oct 5 12:04:53.502: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-resizer Oct 5 12:04:53.506: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5388 Oct 5 12:04:53.506: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5388 Oct 5 12:04:53.509: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5388 Oct 5 12:04:53.512: INFO: creating *v1.Role: csi-mock-volumes-5388-4974/external-resizer-cfg-csi-mock-volumes-5388 Oct 5 12:04:53.516: INFO: creating *v1.RoleBinding: csi-mock-volumes-5388-4974/csi-resizer-role-cfg Oct 5 12:04:53.520: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-snapshotter Oct 5 12:04:53.523: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5388 Oct 5 12:04:53.524: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-5388 Oct 5 12:04:53.527: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5388 Oct 5 12:04:53.531: INFO: creating *v1.Role: csi-mock-volumes-5388-4974/external-snapshotter-leaderelection-csi-mock-volumes-5388 Oct 5 12:04:53.535: INFO: creating *v1.RoleBinding: csi-mock-volumes-5388-4974/external-snapshotter-leaderelection Oct 5 12:04:53.538: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-mock Oct 5 12:04:53.542: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5388 Oct 5 12:04:53.546: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5388 Oct 5 12:04:53.549: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5388 Oct 5 12:04:53.553: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5388 Oct 5 12:04:53.557: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5388 Oct 5 12:04:53.560: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5388 Oct 5 12:04:53.564: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5388 Oct 5 12:04:53.568: INFO: creating *v1.StatefulSet: csi-mock-volumes-5388-4974/csi-mockplugin Oct 5 12:04:53.574: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5388 Oct 5 12:04:53.578: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5388" Oct 5 12:04:53.581: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5388 to register on node v122-worker I1005 12:04:59.636167 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1005 12:04:59.638177 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5388","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:04:59.639940 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1005 12:04:59.642485 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1005 12:04:59.785779 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5388","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:04:59.837982 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5388"},"Error":"","FullError":null} STEP: Creating pod Oct 5 12:05:03.104: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:05:03.110: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9ftd6] to have phase Bound Oct 5 
12:05:03.112: INFO: PersistentVolumeClaim pvc-9ftd6 found but phase is Pending instead of Bound. I1005 12:05:03.118999 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77"}}},"Error":"","FullError":null} Oct 5 12:05:05.116: INFO: PersistentVolumeClaim pvc-9ftd6 found and phase=Bound (2.005971897s) Oct 5 12:05:05.133: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9ftd6] to have phase Bound Oct 5 12:05:05.136: INFO: PersistentVolumeClaim pvc-9ftd6 found and phase=Bound (3.098976ms) STEP: Waiting for expected CSI calls I1005 12:05:06.710137 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:06.713220 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:06.716190 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77","storage.kubernetes.io/csiProvisionerIdentity":"1664971499643-8081-csi-mock-csi-mock-volumes-5388"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Deleting the previously created pod Oct 5 12:05:07.137: INFO: Deleting pod "pvc-volume-tester-v4tcb" in namespace "csi-mock-volumes-5388" Oct 5 12:05:07.143: INFO: Wait up to 5m0s for pod "pvc-volume-tester-v4tcb" to be fully deleted I1005 12:05:07.316124 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:07.318756 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:07.320939 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77","storage.kubernetes.io/csiProvisionerIdentity":"1664971499643-8081-csi-mock-csi-mock-volumes-5388"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1005 12:05:08.424792 21 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:08.427232 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:08.429891 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77","storage.kubernetes.io/csiProvisionerIdentity":"1664971499643-8081-csi-mock-csi-mock-volumes-5388"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1005 12:05:10.442558 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:10.445598 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:10.448220 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77","storage.kubernetes.io/csiProvisionerIdentity":"1664971499643-8081-csi-mock-csi-mock-volumes-5388"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1005 12:05:14.473892 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:14.476792 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:14.479550 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77","storage.kubernetes.io/csiProvisionerIdentity":"1664971499643-8081-csi-mock-csi-mock-volumes-5388"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1005 12:05:22.543994 21 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:22.546689 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:22.549061 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77","storage.kubernetes.io/csiProvisionerIdentity":"1664971499643-8081-csi-mock-csi-mock-volumes-5388"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1005 12:05:38.582719 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:38.586099 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:05:38.589923 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77","storage.kubernetes.io/csiProvisionerIdentity":"1664971499643-8081-csi-mock-csi-mock-volumes-5388"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1005 12:06:10.666746 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:06:10.668842 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:06:10.670950 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77","storage.kubernetes.io/csiProvisionerIdentity":"1664971499643-8081-csi-mock-csi-mock-volumes-5388"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-v4tcb Oct 5 12:07:12.151: INFO: Deleting 
pod "pvc-volume-tester-v4tcb" in namespace "csi-mock-volumes-5388" STEP: Deleting claim pvc-9ftd6 Oct 5 12:07:12.163: INFO: Waiting up to 2m0s for PersistentVolume pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77 to get deleted Oct 5 12:07:12.166: INFO: PersistentVolume pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77 found and phase=Bound (3.209084ms) I1005 12:07:12.189711 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Oct 5 12:07:14.172: INFO: PersistentVolume pvc-eca6a7ea-4989-4720-a2a7-8de7e2f12c77 was removed STEP: Deleting storageclass csi-mock-volumes-5388-sctt5hz STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5388 STEP: Waiting for namespaces [csi-mock-volumes-5388] to vanish STEP: uninstalling csi mock driver Oct 5 12:07:27.213: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-attacher Oct 5 12:07:27.218: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5388 Oct 5 12:07:27.223: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5388 Oct 5 12:07:27.228: INFO: deleting *v1.Role: csi-mock-volumes-5388-4974/external-attacher-cfg-csi-mock-volumes-5388 Oct 5 12:07:27.233: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5388-4974/csi-attacher-role-cfg Oct 5 12:07:27.237: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-provisioner Oct 5 12:07:27.242: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5388 Oct 5 12:07:27.248: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5388 Oct 5 12:07:27.253: INFO: deleting *v1.Role: csi-mock-volumes-5388-4974/external-provisioner-cfg-csi-mock-volumes-5388 Oct 5 12:07:27.257: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5388-4974/csi-provisioner-role-cfg Oct 5 12:07:27.262: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-resizer Oct 5 12:07:27.267: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5388 Oct 5 12:07:27.272: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5388 Oct 5 12:07:27.277: INFO: deleting *v1.Role: csi-mock-volumes-5388-4974/external-resizer-cfg-csi-mock-volumes-5388 Oct 5 12:07:27.281: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5388-4974/csi-resizer-role-cfg Oct 5 12:07:27.286: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-snapshotter Oct 5 12:07:27.290: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5388 Oct 5 12:07:27.295: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5388 Oct 5 12:07:27.299: INFO: deleting *v1.Role: csi-mock-volumes-5388-4974/external-snapshotter-leaderelection-csi-mock-volumes-5388 Oct 5 12:07:27.304: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5388-4974/external-snapshotter-leaderelection Oct 5 12:07:27.309: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5388-4974/csi-mock Oct 5 12:07:27.313: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5388 Oct 5 12:07:27.318: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5388 Oct 5 12:07:27.322: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5388 Oct 5 12:07:27.326: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5388 Oct 5 12:07:27.331: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-resizer-role-csi-mock-volumes-5388 Oct 5 12:07:27.335: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5388 Oct 5 12:07:27.340: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5388 Oct 5 12:07:27.344: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5388-4974/csi-mockplugin Oct 5 12:07:27.350: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5388 STEP: deleting the driver namespace: csi-mock-volumes-5388-4974 STEP: Waiting for namespaces [csi-mock-volumes-5388-4974] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:11.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:197.974 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:735 should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error","total":-1,"completed":5,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:57.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 STEP: Building a driver namespace object, basename csi-mock-volumes-430 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Oct 5 12:06:57.541: INFO: creating *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-attacher Oct 5 12:06:57.545: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-430 Oct 5 12:06:57.545: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-430 Oct 5 12:06:57.549: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-430 Oct 5 12:06:57.553: INFO: creating *v1.Role: csi-mock-volumes-430-2401/external-attacher-cfg-csi-mock-volumes-430 Oct 5 12:06:57.556: INFO: creating *v1.RoleBinding: csi-mock-volumes-430-2401/csi-attacher-role-cfg Oct 5 12:06:57.560: INFO: creating *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-provisioner Oct 5 12:06:57.564: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-430 Oct 5 12:06:57.564: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-430 Oct 5 12:06:57.568: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-430 Oct 5 12:06:57.572: INFO: creating *v1.Role: csi-mock-volumes-430-2401/external-provisioner-cfg-csi-mock-volumes-430 Oct 5 12:06:57.576: INFO: creating *v1.RoleBinding: csi-mock-volumes-430-2401/csi-provisioner-role-cfg Oct 5 12:06:57.580: INFO: 
creating *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-resizer Oct 5 12:06:57.584: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-430 Oct 5 12:06:57.584: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-430 Oct 5 12:06:57.588: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-430 Oct 5 12:06:57.592: INFO: creating *v1.Role: csi-mock-volumes-430-2401/external-resizer-cfg-csi-mock-volumes-430 Oct 5 12:06:57.595: INFO: creating *v1.RoleBinding: csi-mock-volumes-430-2401/csi-resizer-role-cfg Oct 5 12:06:57.599: INFO: creating *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-snapshotter Oct 5 12:06:57.603: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-430 Oct 5 12:06:57.603: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-430 Oct 5 12:06:57.607: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-430 Oct 5 12:06:57.611: INFO: creating *v1.Role: csi-mock-volumes-430-2401/external-snapshotter-leaderelection-csi-mock-volumes-430 Oct 5 12:06:57.614: INFO: creating *v1.RoleBinding: csi-mock-volumes-430-2401/external-snapshotter-leaderelection Oct 5 12:06:57.618: INFO: creating *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-mock Oct 5 12:06:57.622: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-430 Oct 5 12:06:57.625: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-430 Oct 5 12:06:57.629: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-430 Oct 5 12:06:57.633: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-430 Oct 5 12:06:57.636: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-430 Oct 5 12:06:57.640: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-430 Oct 5 12:06:57.644: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-430 Oct 5 12:06:57.648: INFO: creating *v1.StatefulSet: csi-mock-volumes-430-2401/csi-mockplugin Oct 5 12:06:57.654: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-430 Oct 5 12:06:57.658: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-430" Oct 5 12:06:57.661: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-430 to register on node v122-worker I1005 12:06:59.690954 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1005 12:06:59.693770 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-430","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:06:59.696048 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1005 12:06:59.698777 20 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1005 12:06:59.792996 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-430","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:06:59.881986 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-430"},"Error":"","FullError":null} STEP: Creating pod Oct 5 12:07:02.681: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:07:02.688: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-qzglv] to have phase Bound Oct 5 12:07:02.691: INFO: PersistentVolumeClaim pvc-qzglv found but phase is Pending instead of Bound. I1005 12:07:02.698262 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9"}}},"Error":"","FullError":null} Oct 5 12:07:04.695: INFO: PersistentVolumeClaim pvc-qzglv found and phase=Bound (2.007506289s) Oct 5 12:07:04.707: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-qzglv] to have phase Bound Oct 5 12:07:04.710: INFO: PersistentVolumeClaim pvc-qzglv found and phase=Bound (3.067898ms) STEP: Waiting for expected CSI calls I1005 12:07:04.890658 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:07:04.893818 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:07:04.896392 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9","storage.kubernetes.io/csiProvisionerIdentity":"1664971619700-8081-csi-mock-csi-mock-volumes-430"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1005 12:07:05.497632 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:07:05.500487 20 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:07:05.502898 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9","storage.kubernetes.io/csiProvisionerIdentity":"1664971619700-8081-csi-mock-csi-mock-volumes-430"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1005 12:07:06.606979 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:07:06.609603 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:07:06.611940 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9","storage.kubernetes.io/csiProvisionerIdentity":"1664971619700-8081-csi-mock-csi-mock-volumes-430"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1005 12:07:08.628501 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:07:08.631344 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:07:08.633: INFO: >>> kubeConfig: /root/.kube/config I1005 12:07:08.776487 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9","storage.kubernetes.io/csiProvisionerIdentity":"1664971619700-8081-csi-mock-csi-mock-volumes-430"}},"Response":{},"Error":"","FullError":null} I1005 12:07:08.784953 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:07:08.786894 20 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:07:08.789: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:07:08.914: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:07:09.066: INFO: >>> kubeConfig: /root/.kube/config I1005 12:07:09.197772 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9/globalmount","target_path":"/var/lib/kubelet/pods/ada7e4ce-7d8e-40c2-9d05-892faef9210d/volumes/kubernetes.io~csi/pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9","storage.kubernetes.io/csiProvisionerIdentity":"1664971619700-8081-csi-mock-csi-mock-volumes-430"}},"Response":{},"Error":"","FullError":null} STEP: Waiting for pod to be running STEP: Deleting the previously created pod Oct 5 12:07:11.721: INFO: Deleting pod "pvc-volume-tester-md24t" in namespace "csi-mock-volumes-430" Oct 5 12:07:11.728: INFO: Wait up to 5m0s for pod "pvc-volume-tester-md24t" to be fully deleted Oct 5 12:07:12.763: INFO: >>> kubeConfig: /root/.kube/config I1005 12:07:12.890596 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ada7e4ce-7d8e-40c2-9d05-892faef9210d/volumes/kubernetes.io~csi/pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9/mount"},"Response":{},"Error":"","FullError":null} I1005 12:07:12.967744 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:07:12.970221 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-md24t Oct 5 12:07:14.737: INFO: Deleting pod "pvc-volume-tester-md24t" in namespace "csi-mock-volumes-430" STEP: Deleting claim pvc-qzglv Oct 5 12:07:14.748: INFO: Waiting up to 2m0s for PersistentVolume pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9 to get deleted Oct 5 12:07:14.752: INFO: PersistentVolume pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9 found and phase=Bound (3.374642ms) I1005 12:07:14.778252 20 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Oct 5 12:07:16.756: INFO: PersistentVolume pvc-807de9f6-5212-4f13-a204-cac8e47ae1d9 was removed STEP: Deleting storageclass csi-mock-volumes-430-scgr4z8 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-430 STEP: Waiting for namespaces [csi-mock-volumes-430] to vanish STEP: uninstalling csi mock driver Oct 5 12:07:29.791: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-attacher Oct 5 12:07:29.797: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-430 Oct 5 12:07:29.802: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-430 Oct 5 
12:07:29.807: INFO: deleting *v1.Role: csi-mock-volumes-430-2401/external-attacher-cfg-csi-mock-volumes-430 Oct 5 12:07:29.812: INFO: deleting *v1.RoleBinding: csi-mock-volumes-430-2401/csi-attacher-role-cfg Oct 5 12:07:29.816: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-provisioner Oct 5 12:07:29.821: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-430 Oct 5 12:07:29.825: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-430 Oct 5 12:07:29.830: INFO: deleting *v1.Role: csi-mock-volumes-430-2401/external-provisioner-cfg-csi-mock-volumes-430 Oct 5 12:07:29.834: INFO: deleting *v1.RoleBinding: csi-mock-volumes-430-2401/csi-provisioner-role-cfg Oct 5 12:07:29.839: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-resizer Oct 5 12:07:29.843: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-430 Oct 5 12:07:29.847: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-430 Oct 5 12:07:29.852: INFO: deleting *v1.Role: csi-mock-volumes-430-2401/external-resizer-cfg-csi-mock-volumes-430 Oct 5 12:07:29.856: INFO: deleting *v1.RoleBinding: csi-mock-volumes-430-2401/csi-resizer-role-cfg Oct 5 12:07:29.861: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-snapshotter Oct 5 12:07:29.865: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-430 Oct 5 12:07:29.870: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-430 Oct 5 12:07:29.874: INFO: deleting *v1.Role: csi-mock-volumes-430-2401/external-snapshotter-leaderelection-csi-mock-volumes-430 Oct 5 12:07:29.878: INFO: deleting *v1.RoleBinding: csi-mock-volumes-430-2401/external-snapshotter-leaderelection Oct 5 12:07:29.883: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-430-2401/csi-mock Oct 5 12:07:29.887: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-430 Oct 5 12:07:29.892: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-430 Oct 5 12:07:29.896: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-430 Oct 5 12:07:29.901: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-430 Oct 5 12:07:29.905: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-430 Oct 5 12:07:29.909: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-430 Oct 5 12:07:29.918: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-430 Oct 5 12:07:29.923: INFO: deleting *v1.StatefulSet: csi-mock-volumes-430-2401/csi-mockplugin Oct 5 12:07:29.929: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-430 STEP: deleting the driver namespace: csi-mock-volumes-430-2401 STEP: Waiting for namespaces [csi-mock-volumes-430-2401] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:13.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:76.503 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:735 should retry NodeStage after NodeStage final error 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:05:28.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 STEP: Building a driver namespace object, basename csi-mock-volumes-3443 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:05:28.091: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-attacher Oct 5 12:05:28.095: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3443 Oct 5 12:05:28.095: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3443 Oct 5 12:05:28.099: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3443 Oct 5 12:05:28.102: INFO: creating *v1.Role: csi-mock-volumes-3443-4485/external-attacher-cfg-csi-mock-volumes-3443 Oct 5 12:05:28.106: INFO: creating *v1.RoleBinding: csi-mock-volumes-3443-4485/csi-attacher-role-cfg Oct 5 12:05:28.110: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-provisioner Oct 5 12:05:28.113: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3443 Oct 5 12:05:28.113: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3443 Oct 5 12:05:28.117: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3443 Oct 5 12:05:28.121: INFO: creating *v1.Role: csi-mock-volumes-3443-4485/external-provisioner-cfg-csi-mock-volumes-3443 Oct 5 12:05:28.124: INFO: creating *v1.RoleBinding: csi-mock-volumes-3443-4485/csi-provisioner-role-cfg Oct 5 12:05:28.128: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-resizer Oct 5 12:05:28.131: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3443 Oct 5 12:05:28.132: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3443 Oct 5 12:05:28.135: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3443 Oct 5 12:05:28.139: INFO: creating *v1.Role: csi-mock-volumes-3443-4485/external-resizer-cfg-csi-mock-volumes-3443 Oct 5 12:05:28.144: INFO: creating *v1.RoleBinding: csi-mock-volumes-3443-4485/csi-resizer-role-cfg Oct 5 12:05:28.147: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-snapshotter Oct 5 12:05:28.151: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3443 Oct 5 12:05:28.151: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3443 Oct 5 12:05:28.155: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3443 Oct 5 12:05:28.159: INFO: creating *v1.Role: csi-mock-volumes-3443-4485/external-snapshotter-leaderelection-csi-mock-volumes-3443 Oct 5 12:05:28.162: INFO: creating *v1.RoleBinding: csi-mock-volumes-3443-4485/external-snapshotter-leaderelection Oct 5 12:05:28.166: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-mock Oct 5 12:05:28.169: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3443 Oct 5 
12:05:28.172: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3443 Oct 5 12:05:28.176: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3443 Oct 5 12:05:28.179: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3443 Oct 5 12:05:28.182: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3443 Oct 5 12:05:28.185: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3443 Oct 5 12:05:28.189: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3443 Oct 5 12:05:28.192: INFO: creating *v1.StatefulSet: csi-mock-volumes-3443-4485/csi-mockplugin Oct 5 12:05:28.198: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3443 Oct 5 12:05:28.201: INFO: creating *v1.StatefulSet: csi-mock-volumes-3443-4485/csi-mockplugin-attacher Oct 5 12:05:28.206: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3443" Oct 5 12:05:28.208: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3443 to register on node v122-worker2 STEP: Creating pod Oct 5 12:05:37.727: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:05:37.735: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-27f4t] to have phase Bound Oct 5 12:05:37.738: INFO: PersistentVolumeClaim pvc-27f4t found but phase is Pending instead of Bound. Oct 5 12:05:39.742: INFO: PersistentVolumeClaim pvc-27f4t found and phase=Bound (2.00752227s) STEP: checking for CSIInlineVolumes feature Oct 5 12:05:51.772: INFO: Pod inline-volume-zgcwq has the following logs: Oct 5 12:05:51.779: INFO: Deleting pod "inline-volume-zgcwq" in namespace "csi-mock-volumes-3443" Oct 5 12:05:51.784: INFO: Wait up to 5m0s for pod "inline-volume-zgcwq" to be fully deleted STEP: Deleting the previously created pod Oct 5 12:07:57.793: INFO: Deleting pod "pvc-volume-tester-sclqv" in namespace "csi-mock-volumes-3443" Oct 5 12:07:57.798: INFO: Wait up to 5m0s for pod "pvc-volume-tester-sclqv" to be fully deleted STEP: Checking CSI driver logs Oct 5 12:07:59.815: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-sclqv Oct 5 12:07:59.815: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-3443 Oct 5 12:07:59.815: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: c7298a47-f71b-4708-a040-af8364fffe31 Oct 5 12:07:59.815: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Oct 5 12:07:59.815: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false Oct 5 12:07:59.815: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/c7298a47-f71b-4708-a040-af8364fffe31/volumes/kubernetes.io~csi/pvc-56091811-b1e8-4fcb-ad05-8f9fdd6312c9/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-sclqv Oct 5 12:07:59.815: INFO: Deleting pod "pvc-volume-tester-sclqv" in namespace "csi-mock-volumes-3443" STEP: Deleting claim pvc-27f4t Oct 5 12:07:59.826: INFO: Waiting up to 2m0s for PersistentVolume pvc-56091811-b1e8-4fcb-ad05-8f9fdd6312c9 to get deleted Oct 5 12:07:59.829: INFO: PersistentVolume pvc-56091811-b1e8-4fcb-ad05-8f9fdd6312c9 found and phase=Bound (3.034066ms) Oct 5 12:08:01.833: INFO: PersistentVolume 
pvc-56091811-b1e8-4fcb-ad05-8f9fdd6312c9 was removed STEP: Deleting storageclass csi-mock-volumes-3443-scjhwlq STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3443 STEP: Waiting for namespaces [csi-mock-volumes-3443] to vanish STEP: uninstalling csi mock driver Oct 5 12:08:07.849: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-attacher Oct 5 12:08:07.855: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3443 Oct 5 12:08:07.860: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3443 Oct 5 12:08:07.865: INFO: deleting *v1.Role: csi-mock-volumes-3443-4485/external-attacher-cfg-csi-mock-volumes-3443 Oct 5 12:08:07.869: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3443-4485/csi-attacher-role-cfg Oct 5 12:08:07.874: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-provisioner Oct 5 12:08:07.879: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3443 Oct 5 12:08:07.883: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3443 Oct 5 12:08:07.888: INFO: deleting *v1.Role: csi-mock-volumes-3443-4485/external-provisioner-cfg-csi-mock-volumes-3443 Oct 5 12:08:07.899: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3443-4485/csi-provisioner-role-cfg Oct 5 12:08:07.903: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-resizer Oct 5 12:08:07.908: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3443 Oct 5 12:08:07.912: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3443 Oct 5 12:08:07.917: INFO: deleting *v1.Role: csi-mock-volumes-3443-4485/external-resizer-cfg-csi-mock-volumes-3443 Oct 5 12:08:07.922: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3443-4485/csi-resizer-role-cfg Oct 5 12:08:07.926: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-snapshotter Oct 5 12:08:07.931: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3443 Oct 5 12:08:07.935: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3443 Oct 5 12:08:07.939: INFO: deleting *v1.Role: csi-mock-volumes-3443-4485/external-snapshotter-leaderelection-csi-mock-volumes-3443 Oct 5 12:08:07.944: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3443-4485/external-snapshotter-leaderelection Oct 5 12:08:07.948: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3443-4485/csi-mock Oct 5 12:08:07.952: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3443 Oct 5 12:08:07.956: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3443 Oct 5 12:08:07.961: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3443 Oct 5 12:08:07.966: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3443 Oct 5 12:08:07.970: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3443 Oct 5 12:08:07.975: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3443 Oct 5 12:08:07.979: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3443 Oct 5 12:08:07.984: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3443-4485/csi-mockplugin Oct 5 12:08:07.989: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3443 Oct 5 12:08:07.994: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3443-4485/csi-mockplugin-attacher STEP: deleting the 
driver namespace: csi-mock-volumes-3443-4485 STEP: Waiting for namespaces [csi-mock-volumes-3443-4485] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:14.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:166.005 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444 should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":5,"skipped":160,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:08.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Oct 5 12:08:08.654: INFO: The status of Pod test-hostpath-type-8ggpr is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:08:10.659: INFO: The status of Pod test-hostpath-type-8ggpr is Running (Ready = true) STEP: running on node v122-worker STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:14.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-9610" for this suite. 
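The HostPathType spec that just finished exercises kubelet's type checking for hostPath volumes: the shared setup creates 'afile' on the node via HostPathFileOrCreate, and the test then tries to mount that same path with HostPathType=HostPathDirectory and waits for the resulting error event. A minimal sketch of such a pod using the core/v1 Go types; the object and path names here ("hostpath-type-demo", "/tmp/afile") are placeholders, not the exact objects the suite creates:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathDirPod builds a pod whose hostPath volume declares HostPathDirectory.
// If the path on the node is a regular file, kubelet refuses to mount it and
// emits the error event the spec above waits for.
func hostPathDirPod() *corev1.Pod {
	hostPathType := corev1.HostPathDirectory
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-type-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "host-vol",
					MountPath: "/mnt/test",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host-vol",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/afile", // already exists as a regular file on the node
						Type: &hostPathType,
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(hostPathDirPod().Name) }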
• [SLOW TEST:6.104 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory","total":-1,"completed":2,"skipped":106,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:14.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Oct 5 12:08:16.091: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3966 PodName:hostexec-v122-worker2-677fl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:16.091: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:16.237: INFO: exec v122-worker2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Oct 5 12:08:16.237: INFO: exec v122-worker2: stdout: "0\n" Oct 5 12:08:16.237: INFO: exec v122-worker2: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Oct 5 12:08:16.237: INFO: exec v122-worker2: exit code: 0 Oct 5 12:08:16.237: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:16.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3966" for this suite. 
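The skip recorded above comes from a precondition check: before running the gce-localssd-scsi-fs volume-type specs, the suite execs `ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l` on the node (through a hostexec pod and nsenter) and skips when the count is zero, which is what happened here. A rough stand-in for that check, with illustrative helper names rather than the framework's real ones, and running the command directly instead of via nsenter:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

const ssdDir = "/mnt/disks/by-uuid/google-local-ssds-scsi-fs/"

// localSSDCount mirrors `ls -1 <dir> | wc -l`: count entries, treating a
// missing directory as zero.
func localSSDCount() (int, error) {
	out, err := exec.Command("sh", "-c", "ls -1 "+ssdDir+" 2>/dev/null | wc -l").Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	if n, err := localSSDCount(); err != nil || n < 1 {
		fmt.Println("SKIP: requires at least 1 scsi fs localSSD")
		return
	}
	fmt.Println("local SSD precondition satisfied")
}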
S [SKIPPING] in Spec Setup (BeforeEach) [2.208 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1250 ------------------------------ SSS ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error","total":-1,"completed":9,"skipped":445,"failed":0} [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:13.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Oct 5 12:08:13.996: INFO: The status of Pod test-hostpath-type-vwgbr is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:08:16.000: INFO: The status of Pod test-hostpath-type-vwgbr is Running (Ready = true) STEP: running on node v122-worker2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:20.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-8780" for this suite. 
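Both HostPathType specs in this stretch hinge on kubelet comparing the declared HostPathType against what is actually on the node: HostPathFileOrCreate creates 'afile' when it is missing, HostPathFile then mounts it successfully, while HostPathDirectory against that same file fails. A simplified stand-in for that comparison (not kubelet's actual implementation, and the path is a placeholder):

package main

import (
	"fmt"
	"os"
)

// checkHostPath approximates the per-type validation: FileOrCreate creates a
// missing regular file, File requires an existing regular file, Directory
// requires an existing directory.
func checkHostPath(path, hostPathType string) error {
	info, err := os.Stat(path)
	switch hostPathType {
	case "FileOrCreate":
		if os.IsNotExist(err) {
			f, cerr := os.OpenFile(path, os.O_CREATE, 0o644)
			if cerr != nil {
				return cerr
			}
			return f.Close()
		}
		return err
	case "File":
		if err != nil {
			return err
		}
		if !info.Mode().IsRegular() {
			return fmt.Errorf("%s is not a regular file", path)
		}
	case "Directory":
		if err != nil {
			return err
		}
		if !info.IsDir() {
			// the condition the "fail on mounting file ... HostPathDirectory"
			// spec provokes: the path exists but is a regular file
			return fmt.Errorf("%s is not a directory", path)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkHostPath("/tmp/afile", "Directory"))
}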
• [SLOW TEST:6.104 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile","total":-1,"completed":10,"skipped":445,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:07:46.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525 STEP: Building a driver namespace object, basename csi-mock-volumes-8160 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:07:46.973: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-attacher Oct 5 12:07:46.977: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8160 Oct 5 12:07:46.977: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8160 Oct 5 12:07:46.980: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8160 Oct 5 12:07:46.984: INFO: creating *v1.Role: csi-mock-volumes-8160-1669/external-attacher-cfg-csi-mock-volumes-8160 Oct 5 12:07:46.988: INFO: creating *v1.RoleBinding: csi-mock-volumes-8160-1669/csi-attacher-role-cfg Oct 5 12:07:46.992: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-provisioner Oct 5 12:07:46.996: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8160 Oct 5 12:07:46.996: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8160 Oct 5 12:07:47.000: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8160 Oct 5 12:07:47.004: INFO: creating *v1.Role: csi-mock-volumes-8160-1669/external-provisioner-cfg-csi-mock-volumes-8160 Oct 5 12:07:47.007: INFO: creating *v1.RoleBinding: csi-mock-volumes-8160-1669/csi-provisioner-role-cfg Oct 5 12:07:47.011: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-resizer Oct 5 12:07:47.015: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8160 Oct 5 12:07:47.015: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8160 Oct 5 12:07:47.019: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8160 Oct 5 12:07:47.023: INFO: creating *v1.Role: csi-mock-volumes-8160-1669/external-resizer-cfg-csi-mock-volumes-8160 Oct 5 12:07:47.027: INFO: creating *v1.RoleBinding: csi-mock-volumes-8160-1669/csi-resizer-role-cfg Oct 5 12:07:47.031: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-snapshotter Oct 5 12:07:47.035: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8160 Oct 5 12:07:47.035: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8160 Oct 5 12:07:47.039: INFO: creating 
*v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8160 Oct 5 12:07:47.042: INFO: creating *v1.Role: csi-mock-volumes-8160-1669/external-snapshotter-leaderelection-csi-mock-volumes-8160 Oct 5 12:07:47.046: INFO: creating *v1.RoleBinding: csi-mock-volumes-8160-1669/external-snapshotter-leaderelection Oct 5 12:07:47.050: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-mock Oct 5 12:07:47.053: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8160 Oct 5 12:07:47.057: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8160 Oct 5 12:07:47.061: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8160 Oct 5 12:07:47.065: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8160 Oct 5 12:07:47.069: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8160 Oct 5 12:07:47.072: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8160 Oct 5 12:07:47.076: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8160 Oct 5 12:07:47.080: INFO: creating *v1.StatefulSet: csi-mock-volumes-8160-1669/csi-mockplugin Oct 5 12:07:47.087: INFO: creating *v1.StatefulSet: csi-mock-volumes-8160-1669/csi-mockplugin-attacher Oct 5 12:07:47.092: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8160 to register on node v122-worker STEP: Creating pod Oct 5 12:07:52.109: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:07:52.117: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-77j84] to have phase Bound Oct 5 12:07:52.121: INFO: PersistentVolumeClaim pvc-77j84 found but phase is Pending instead of Bound. 
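The spec in progress here, "token should not be plumbed down when CSIDriver is not deployed", covers the negative side of CSIServiceAccountToken: because no CSIDriver object is registered for the mock plugin, kubelet must not pass the pod's service-account token in the NodePublishVolume volume context. For contrast, a minimal sketch of the CSIDriver that would request token plumbing, assuming the storage/v1 API of this cluster version; the driver name and audience are placeholders, not objects from this run:

package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// tokenRequestingDriver declares TokenRequests, which makes kubelet place a
// bound service-account token under csi.storage.k8s.io/serviceAccount.tokens
// in the volume context of NodePublishVolume calls.
func tokenRequestingDriver() *storagev1.CSIDriver {
	expiry := int64(600)
	return &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-example"},
		Spec: storagev1.CSIDriverSpec{
			TokenRequests: []storagev1.TokenRequest{{
				Audience:          "example-audience",
				ExpirationSeconds: &expiry,
			}},
			// ask kubelet to republish periodically so the plugin sees refreshed tokens
			RequiresRepublish: boolPtr(true),
		},
	}
}

func main() { fmt.Println(tokenRequestingDriver().Name) }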
Oct 5 12:07:54.126: INFO: PersistentVolumeClaim pvc-77j84 found and phase=Bound (2.008688299s) STEP: Deleting the previously created pod Oct 5 12:08:02.147: INFO: Deleting pod "pvc-volume-tester-qzljx" in namespace "csi-mock-volumes-8160" Oct 5 12:08:02.153: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qzljx" to be fully deleted STEP: Checking CSI driver logs Oct 5 12:08:04.182: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/6d0a95c0-1909-4c45-9caa-f731c4bdd248/volumes/kubernetes.io~csi/pvc-2b500ff2-5d21-4da9-92b8-793f7277687b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-qzljx Oct 5 12:08:04.182: INFO: Deleting pod "pvc-volume-tester-qzljx" in namespace "csi-mock-volumes-8160" STEP: Deleting claim pvc-77j84 Oct 5 12:08:04.195: INFO: Waiting up to 2m0s for PersistentVolume pvc-2b500ff2-5d21-4da9-92b8-793f7277687b to get deleted Oct 5 12:08:04.198: INFO: PersistentVolume pvc-2b500ff2-5d21-4da9-92b8-793f7277687b found and phase=Bound (3.199491ms) Oct 5 12:08:06.203: INFO: PersistentVolume pvc-2b500ff2-5d21-4da9-92b8-793f7277687b found and phase=Released (2.007315246s) Oct 5 12:08:08.205: INFO: PersistentVolume pvc-2b500ff2-5d21-4da9-92b8-793f7277687b found and phase=Released (4.010242672s) Oct 5 12:08:10.209: INFO: PersistentVolume pvc-2b500ff2-5d21-4da9-92b8-793f7277687b found and phase=Released (6.014101206s) Oct 5 12:08:12.213: INFO: PersistentVolume pvc-2b500ff2-5d21-4da9-92b8-793f7277687b was removed STEP: Deleting storageclass csi-mock-volumes-8160-scfjczb STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8160 STEP: Waiting for namespaces [csi-mock-volumes-8160] to vanish STEP: uninstalling csi mock driver Oct 5 12:08:18.228: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-attacher Oct 5 12:08:18.233: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8160 Oct 5 12:08:18.238: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8160 Oct 5 12:08:18.243: INFO: deleting *v1.Role: csi-mock-volumes-8160-1669/external-attacher-cfg-csi-mock-volumes-8160 Oct 5 12:08:18.249: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8160-1669/csi-attacher-role-cfg Oct 5 12:08:18.254: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-provisioner Oct 5 12:08:18.258: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8160 Oct 5 12:08:18.267: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8160 Oct 5 12:08:18.272: INFO: deleting *v1.Role: csi-mock-volumes-8160-1669/external-provisioner-cfg-csi-mock-volumes-8160 Oct 5 12:08:18.276: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8160-1669/csi-provisioner-role-cfg Oct 5 12:08:18.281: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-resizer Oct 5 12:08:18.285: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8160 Oct 5 12:08:18.290: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8160 Oct 5 12:08:18.294: INFO: deleting *v1.Role: csi-mock-volumes-8160-1669/external-resizer-cfg-csi-mock-volumes-8160 Oct 5 12:08:18.299: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8160-1669/csi-resizer-role-cfg Oct 5 12:08:18.303: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-snapshotter Oct 5 
12:08:18.308: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8160 Oct 5 12:08:18.312: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8160 Oct 5 12:08:18.317: INFO: deleting *v1.Role: csi-mock-volumes-8160-1669/external-snapshotter-leaderelection-csi-mock-volumes-8160 Oct 5 12:08:18.321: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8160-1669/external-snapshotter-leaderelection Oct 5 12:08:18.326: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8160-1669/csi-mock Oct 5 12:08:18.330: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8160 Oct 5 12:08:18.335: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8160 Oct 5 12:08:18.340: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8160 Oct 5 12:08:18.344: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8160 Oct 5 12:08:18.349: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8160 Oct 5 12:08:18.353: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8160 Oct 5 12:08:18.357: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8160 Oct 5 12:08:18.362: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8160-1669/csi-mockplugin Oct 5 12:08:18.367: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8160-1669/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8160-1669 STEP: Waiting for namespaces [csi-mock-volumes-8160-1669] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:24.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:37.504 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497 token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":3,"skipped":147,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:20.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-3486447d-087e-4085-a785-e2d087845f7a" Oct 5 
12:08:24.150: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3486447d-087e-4085-a785-e2d087845f7a && dd if=/dev/zero of=/tmp/local-volume-test-3486447d-087e-4085-a785-e2d087845f7a/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-3486447d-087e-4085-a785-e2d087845f7a/file] Namespace:persistent-local-volumes-test-7064 PodName:hostexec-v122-worker-s4mb4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:24.150: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:24.339: INFO: exec v122-worker: command: mkdir -p /tmp/local-volume-test-3486447d-087e-4085-a785-e2d087845f7a && dd if=/dev/zero of=/tmp/local-volume-test-3486447d-087e-4085-a785-e2d087845f7a/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-3486447d-087e-4085-a785-e2d087845f7a/file Oct 5 12:08:24.339: INFO: exec v122-worker: stdout: "" Oct 5 12:08:24.339: INFO: exec v122-worker: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0248625 s, 843 MB/s\nlosetup: /tmp/local-volume-test-3486447d-087e-4085-a785-e2d087845f7a/file: failed to set up loop device: No such device or address\n" Oct 5 12:08:24.339: INFO: exec v122-worker: exit code: 0 Oct 5 12:08:24.340: FAIL: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).createAndSetupLoopDevice(0xc000be0c60, 0xc000ce6e00, 0x3b, 0xc0040d3800, 0x1400000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 +0x45b k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeBlock(0xc000be0c60, 0xc0040d3800, 0x0, 0x78cd2a8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:146 +0x65 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc000be0c60, 0xc0040d3800, 0x702c9b3, 0x5, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:306 +0x326 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc004f0d8c0, 0x702c9b3, 0x5, 0xc0040d3800, 0x1, 0x0, 0x0, 0xc0033a9680) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:837 +0x157 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc004f0d8c0, 0x702c9b3, 0x5, 0xc0040d3800, 0x1, 0x703610f, 0x9, 0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1102 +0x87 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 +0xb6 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000683c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000683c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000683c80, 0x729c7d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-7064". STEP: Found 4 events. Oct 5 12:08:24.346: INFO: At 2022-10-05 12:08:20 +0000 UTC - event for hostexec-v122-worker-s4mb4: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-7064/hostexec-v122-worker-s4mb4 to v122-worker Oct 5 12:08:24.346: INFO: At 2022-10-05 12:08:21 +0000 UTC - event for hostexec-v122-worker-s4mb4: {kubelet v122-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Oct 5 12:08:24.346: INFO: At 2022-10-05 12:08:21 +0000 UTC - event for hostexec-v122-worker-s4mb4: {kubelet v122-worker} Created: Created container agnhost-container Oct 5 12:08:24.346: INFO: At 2022-10-05 12:08:21 +0000 UTC - event for hostexec-v122-worker-s4mb4: {kubelet v122-worker} Started: Started container agnhost-container Oct 5 12:08:24.349: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 12:08:24.349: INFO: hostexec-v122-worker-s4mb4 v122-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:08:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:08:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:08:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:08:20 +0000 UTC }] Oct 5 12:08:24.349: INFO: Oct 5 12:08:24.353: INFO: Logging node info for node v122-control-plane Oct 5 12:08:24.357: INFO: Node Info: &Node{ObjectMeta:{v122-control-plane 0bba5de9-314a-4743-bf02-bde0ec06daf3 5868 0 2022-10-05 11:59:47 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-10-05 11:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 11:59:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 12:00:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:v122-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:90a9e9edfe9d44d59ee2bec7a8da01cd,SystemUUID:2e684780-1fcb-4016-9109-255b79db130f,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c 
k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:08:24.358: INFO: Logging kubelet events for node v122-control-plane Oct 5 12:08:24.363: INFO: Logging pods the kubelet thinks is on node v122-control-plane Oct 5 12:08:24.394: INFO: kube-controller-manager-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.394: INFO: Container kube-controller-manager ready: true, restart count 0 Oct 5 12:08:24.394: INFO: kube-scheduler-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.394: INFO: Container kube-scheduler ready: true, restart count 0 Oct 5 12:08:24.394: INFO: kindnet-g8rqz started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.394: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:08:24.394: INFO: kube-proxy-xtt57 started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.394: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:08:24.394: INFO: create-loop-devs-lvpbc started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.394: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:08:24.394: INFO: etcd-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.394: INFO: Container etcd ready: true, restart count 0 Oct 5 12:08:24.394: INFO: kube-apiserver-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.394: INFO: Container kube-apiserver ready: true, restart count 0 Oct 5 12:08:24.458: INFO: Latency metrics for node v122-control-plane Oct 5 12:08:24.458: INFO: Logging node info for node v122-worker Oct 5 12:08:24.461: INFO: Node Info: &Node{ObjectMeta:{v122-worker 8286eab4-ee46-4103-bc96-cf44e85cf562 9352 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} 
} {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:08:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:v122-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ce5667169114cc58989bd26cdb88021,SystemUUID:f1b8869e-1c17-4972-b832-4d15146806a4,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 
k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:08:24.462: INFO: Logging kubelet events for node v122-worker Oct 5 12:08:24.467: INFO: Logging pods the kubelet thinks is on node v122-worker Oct 5 12:08:24.534: INFO: pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe started at (0+0 container statuses recorded) Oct 5 12:08:24.535: INFO: create-loop-devs-f76cj started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:08:24.535: INFO: hostexec-v122-worker-s4mb4 started at 2022-10-05 12:08:20 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:08:24.535: INFO: pod-configmaps-2bb22201-613d-442b-9f83-a9d39e6f1499 started at 2022-10-05 12:06:31 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:08:24.535: INFO: hostexec-v122-worker-5bsfw started at 2022-10-05 12:08:11 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container agnhost-container 
ready: true, restart count 0 Oct 5 12:08:24.535: INFO: hostexec-v122-worker-crv8m started at 2022-10-05 12:08:16 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:08:24.535: INFO: pod-741caf9c-0537-4854-971b-5ea6ff382a39 started at 2022-10-05 12:08:19 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:08:24.535: INFO: kindnet-rkh8m started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:08:24.535: INFO: kube-proxy-xkzrn started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:08:24.535: INFO: pod-ephm-test-projected-2cc9 started at 2022-10-05 12:06:37 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container test-container-subpath-projected-2cc9 ready: false, restart count 0 Oct 5 12:08:24.535: INFO: pod-secrets-9d1aa6a5-fe49-413f-85a9-4c2a8e6f4e5b started at 2022-10-05 12:03:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:08:24.535: INFO: pod-secrets-76b16dac-27d0-4343-a0fe-b8ed5dd81977 started at 2022-10-05 12:06:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:08:24.535: INFO: pod-ephm-test-projected-vm97 started at 2022-10-05 12:06:53 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.535: INFO: Container test-container-subpath-projected-vm97 ready: false, restart count 0 Oct 5 12:08:24.649: INFO: Latency metrics for node v122-worker Oct 5 12:08:24.649: INFO: Logging node info for node v122-worker2 Oct 5 12:08:24.653: INFO: Node Info: &Node{ObjectMeta:{v122-worker2 e098b7b6-6804-492f-b9ec-650d1924542e 9116 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:07:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:07:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:07:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:07:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:07:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:v122-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:feea07f38e414515ae57b946e27fa7bb,SystemUUID:07d898dc-4331-403b-9bdf-da8ef413d01c,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:08:24.654: INFO: Logging kubelet events for node v122-worker2 Oct 5 12:08:24.659: INFO: Logging pods the kubelet thinks is on node v122-worker2 Oct 5 12:08:24.674: INFO: kindnet-vqtz2 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.674: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:08:24.674: INFO: kube-proxy-pwsq7 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.674: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:08:24.674: INFO: hostexec-v122-worker2-75hw5 started at 2022-10-05 12:06:49 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.674: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:08:24.674: INFO: pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 started at 2022-10-05 12:07:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.674: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:08:24.674: INFO: coredns-78fcd69978-srwh8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.674: INFO: Container coredns ready: true, restart count 0 Oct 5 12:08:24.674: INFO: local-path-provisioner-58c8ccd54c-lkwwv started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.675: INFO: Container local-path-provisioner ready: true, restart count 0 Oct 5 12:08:24.675: INFO: test-hostpath-type-vwgbr started at 2022-10-05 12:08:13 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.675: INFO: Container host-path-testing ready: true, restart count 0 Oct 5 12:08:24.675: INFO: coredns-78fcd69978-vrzs8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.675: INFO: Container coredns ready: true, restart count 0 Oct 5 12:08:24.675: INFO: external-provisioner-brrs9 started at 2022-10-05 12:08:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.675: 
INFO: Container nfs-provisioner ready: false, restart count 0 Oct 5 12:08:24.675: INFO: create-loop-devs-6sf59 started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.675: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:08:24.675: INFO: pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5 started at 2022-10-05 12:07:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:24.675: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:08:24.803: INFO: Latency metrics for node v122-worker2 Oct 5 12:08:24.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7064" for this suite. • Failure in Spec Setup (BeforeEach) [4.752 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 Oct 5 12:08:24.340: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":10,"skipped":446,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:11.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:08:13.484: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-67ebaf65-d65a-452d-b993-e6ecabd0b584 && mount --bind /tmp/local-volume-test-67ebaf65-d65a-452d-b993-e6ecabd0b584 /tmp/local-volume-test-67ebaf65-d65a-452d-b993-e6ecabd0b584] Namespace:persistent-local-volumes-test-7454 PodName:hostexec-v122-worker-5bsfw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:13.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:08:13.628: INFO: Creating a PV followed by a PVC Oct 5 12:08:13.637: INFO: Waiting for PV local-pvf5g95 to 
bind to PVC pvc-xth42 Oct 5 12:08:13.637: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xth42] to have phase Bound Oct 5 12:08:13.640: INFO: PersistentVolumeClaim pvc-xth42 found but phase is Pending instead of Bound. Oct 5 12:08:15.645: INFO: PersistentVolumeClaim pvc-xth42 found but phase is Pending instead of Bound. Oct 5 12:08:17.670: INFO: PersistentVolumeClaim pvc-xth42 found but phase is Pending instead of Bound. Oct 5 12:08:19.676: INFO: PersistentVolumeClaim pvc-xth42 found and phase=Bound (6.038950572s) Oct 5 12:08:19.676: INFO: Waiting up to 3m0s for PersistentVolume local-pvf5g95 to have phase Bound Oct 5 12:08:19.681: INFO: PersistentVolume local-pvf5g95 found and phase=Bound (4.389327ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Oct 5 12:08:25.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7454 exec pod-741caf9c-0537-4854-971b-5ea6ff382a39 --namespace=persistent-local-volumes-test-7454 -- stat -c %g /mnt/volume1' Oct 5 12:08:25.920: INFO: stderr: "" Oct 5 12:08:25.920: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Oct 5 12:08:27.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7454 exec pod-82db6202-a785-48fe-936d-0e89fdc02235 --namespace=persistent-local-volumes-test-7454 -- stat -c %g /mnt/volume1' Oct 5 12:08:28.151: INFO: stderr: "" Oct 5 12:08:28.151: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-741caf9c-0537-4854-971b-5ea6ff382a39 in namespace persistent-local-volumes-test-7454 STEP: Deleting second pod STEP: Deleting pod pod-82db6202-a785-48fe-936d-0e89fdc02235 in namespace persistent-local-volumes-test-7454 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:08:28.163: INFO: Deleting PersistentVolumeClaim "pvc-xth42" Oct 5 12:08:28.168: INFO: Deleting PersistentVolume "local-pvf5g95" STEP: Removing the test directory Oct 5 12:08:28.173: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-67ebaf65-d65a-452d-b993-e6ecabd0b584 && rm -r /tmp/local-volume-test-67ebaf65-d65a-452d-b993-e6ecabd0b584] Namespace:persistent-local-volumes-test-7454 PodName:hostexec-v122-worker-5bsfw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:28.173: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:28.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7454" for this suite. 
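------------------------------
Both pods above report group 1234 on /mnt/volume1 because each pod spec carries pod.spec.securityContext.fsGroup, which the kubelet applies to the volume before the containers start. A minimal sketch of that pod shape using the corev1 types; the claim name, image, and helper name are illustrative, not the framework's actual helper:

// fsgroup_pod_sketch.go - illustrative only; mirrors the pod shape whose
// "stat -c %g /mnt/volume1" output is 1234 in the run above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func fsGroupPod(claimName string, fsGroup int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "write-pod-"},
		Spec: corev1.PodSpec{
			// kubelet chowns the volume to this GID before starting the containers,
			// so every file under /mnt/volume1 ends up with group 1234.
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:         "write-pod",
				Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
				Command:      []string{"sh", "-c", "trap 'exit 0' TERM; while true; do sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "volume1", MountPath: "/mnt/volume1"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: claimName},
				},
			}},
		},
	}
}

func main() {
	pod := fsGroupPod("pvc-xth42", 1234) // claim name taken from this run, purely illustrative
	fmt.Printf("pod %q requests fsGroup=%d\n", pod.GenerateName, *pod.Spec.SecurityContext.FSGroup)
}

Because both pods in the test carry the same fsGroup, the stat -c %g check returns 1234 for each of them.
------------------------------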
• [SLOW TEST:16.894 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":6,"skipped":291,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:24.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 5 12:08:24.478: INFO: Waiting up to 5m0s for pod "pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe" in namespace "emptydir-1762" to be "Succeeded or Failed" Oct 5 12:08:24.482: INFO: Pod "pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.344275ms Oct 5 12:08:26.486: INFO: Pod "pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008223406s Oct 5 12:08:28.492: INFO: Pod "pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013586055s Oct 5 12:08:30.497: INFO: Pod "pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018815507s STEP: Saw pod success Oct 5 12:08:30.497: INFO: Pod "pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe" satisfied condition "Succeeded or Failed" Oct 5 12:08:30.500: INFO: Trying to get logs from node v122-worker pod pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe container test-container: STEP: delete the pod Oct 5 12:08:30.516: INFO: Waiting for pod pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe to disappear Oct 5 12:08:30.519: INFO: Pod pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:30.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1762" for this suite. 
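------------------------------
The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines are a poll of the pod's phase until it reaches a terminal state. A rough client-go equivalent, as a sketch only: the two-second interval, the helper name, and the hard-coded namespace/pod name are assumptions, not the framework's actual code.

// wait_for_pod_sketch.go - illustrative polling loop for a pod's terminal phase.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodSucceeded polls until the pod is Succeeded, fails fast if it is Failed,
// and gives up after timeout - the same shape as the 5m0s wait logged above.
func waitForPodSucceeded(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition satisfied, stop polling
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed: %s", ns, name, pod.Status.Reason)
		default:
			return false, nil // Pending or Running, keep polling
		}
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// namespace and pod name taken from the emptydir run above, purely illustrative
	if err := waitForPodSucceeded(client, "emptydir-1762", "pod-f2ebf5ab-8b21-4123-a387-454e3a15eebe", 5*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod reached Succeeded")
}

When a spec fails instead, the framework's AfterEach additionally dumps namespace events and node info, which is what produced the long node listings after the losetup failure earlier in this section.
------------------------------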
• [SLOW TEST:6.100 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":4,"skipped":167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:14.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144 [It] should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:540 Oct 5 12:08:14.776: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: creating an external dynamic provisioner pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass Oct 5 12:08:26.917: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: creating a claim with a external provisioning annotation STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-9651 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1572864000 0} {} 1500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-9651-externalxb5pn,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Oct 5 12:08:26.925: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ktrbf] to have phase Bound Oct 5 12:08:26.928: INFO: PersistentVolumeClaim pvc-ktrbf found but phase is Pending instead of Bound. Oct 5 12:08:28.932: INFO: PersistentVolumeClaim pvc-ktrbf found but phase is Pending instead of Bound. Oct 5 12:08:30.937: INFO: PersistentVolumeClaim pvc-ktrbf found but phase is Pending instead of Bound. 
Oct 5 12:08:32.942: INFO: PersistentVolumeClaim pvc-ktrbf found and phase=Bound (6.017136562s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-9651"/"pvc-ktrbf" STEP: deleting the claim's PV "pvc-bcd17b70-27f4-43e6-be65-fa568559d86a" Oct 5 12:08:32.953: INFO: Waiting up to 20m0s for PersistentVolume pvc-bcd17b70-27f4-43e6-be65-fa568559d86a to get deleted Oct 5 12:08:32.957: INFO: PersistentVolume pvc-bcd17b70-27f4-43e6-be65-fa568559d86a found and phase=Bound (3.178831ms) Oct 5 12:08:37.964: INFO: PersistentVolume pvc-bcd17b70-27f4-43e6-be65-fa568559d86a was removed Oct 5 12:08:37.964: INFO: deleting claim "volume-provisioning-9651"/"pvc-ktrbf" Oct 5 12:08:37.967: INFO: deleting storage class volume-provisioning-9651-externalxb5pn STEP: Deleting pod external-provisioner-brrs9 in namespace volume-provisioning-9651 [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:37.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-9651" for this suite. • [SLOW TEST:23.258 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner External /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:539 should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:540 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]","total":-1,"completed":3,"skipped":111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:24.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:08:26.896: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-c610f4dc-6cf6-4141-8c29-412d151adc54-backend && ln -s /tmp/local-volume-test-c610f4dc-6cf6-4141-8c29-412d151adc54-backend /tmp/local-volume-test-c610f4dc-6cf6-4141-8c29-412d151adc54] Namespace:persistent-local-volumes-test-4894 PodName:hostexec-v122-worker2-9wft5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:26.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:08:27.040: INFO: Creating a PV followed by a PVC Oct 5 12:08:27.049: INFO: Waiting for PV local-pvndzn2 to bind to PVC 
pvc-6w5nf Oct 5 12:08:27.049: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6w5nf] to have phase Bound Oct 5 12:08:27.052: INFO: PersistentVolumeClaim pvc-6w5nf found but phase is Pending instead of Bound. Oct 5 12:08:29.056: INFO: PersistentVolumeClaim pvc-6w5nf found but phase is Pending instead of Bound. Oct 5 12:08:31.060: INFO: PersistentVolumeClaim pvc-6w5nf found but phase is Pending instead of Bound. Oct 5 12:08:33.065: INFO: PersistentVolumeClaim pvc-6w5nf found but phase is Pending instead of Bound. Oct 5 12:08:35.068: INFO: PersistentVolumeClaim pvc-6w5nf found and phase=Bound (8.019718445s) Oct 5 12:08:35.068: INFO: Waiting up to 3m0s for PersistentVolume local-pvndzn2 to have phase Bound Oct 5 12:08:35.072: INFO: PersistentVolume local-pvndzn2 found and phase=Bound (3.103857ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:08:37.097: INFO: pod "pod-3f941f4c-c0ca-4d01-8a14-41fb3ccb9c07" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:08:37.097: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4894 PodName:pod-3f941f4c-c0ca-4d01-8a14-41fb3ccb9c07 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:08:37.097: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:37.226: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:08:37.226: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4894 PodName:pod-3f941f4c-c0ca-4d01-8a14-41fb3ccb9c07 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:08:37.226: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:37.342: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-3f941f4c-c0ca-4d01-8a14-41fb3ccb9c07 in namespace persistent-local-volumes-test-4894 STEP: Creating pod2 STEP: Creating a pod Oct 5 12:08:39.363: INFO: pod "pod-7f64a0eb-8000-4b9f-a058-b8892248fd5e" created on Node "v122-worker2" STEP: Reading in pod2 Oct 5 12:08:39.363: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4894 PodName:pod-7f64a0eb-8000-4b9f-a058-b8892248fd5e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:08:39.363: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:39.445: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-7f64a0eb-8000-4b9f-a058-b8892248fd5e in namespace persistent-local-volumes-test-4894 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:08:39.450: INFO: Deleting PersistentVolumeClaim "pvc-6w5nf" Oct 5 12:08:39.454: INFO: Deleting PersistentVolume "local-pvndzn2" STEP: Removing the test directory Oct 5 12:08:39.458: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-c610f4dc-6cf6-4141-8c29-412d151adc54 && rm -r /tmp/local-volume-test-c610f4dc-6cf6-4141-8c29-412d151adc54-backend] Namespace:persistent-local-volumes-test-4894 PodName:hostexec-v122-worker2-9wft5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:39.458: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:39.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4894" for this suite. • [SLOW TEST:14.735 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":455,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:30.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:08:34.703: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-58e683c3-ff8d-4058-8ca3-a9adae1eadac] Namespace:persistent-local-volumes-test-7601 PodName:hostexec-v122-worker-r7r4r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:34.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:08:34.844: INFO: Creating a PV followed by a PVC Oct 5 12:08:34.856: INFO: Waiting for PV local-pvz8drj to bind to PVC pvc-kkczt Oct 5 12:08:34.856: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-kkczt] to have phase Bound Oct 5 12:08:34.859: INFO: PersistentVolumeClaim pvc-kkczt found but phase is Pending instead of Bound. 
Oct 5 12:08:36.864: INFO: PersistentVolumeClaim pvc-kkczt found and phase=Bound (2.008006001s) Oct 5 12:08:36.864: INFO: Waiting up to 3m0s for PersistentVolume local-pvz8drj to have phase Bound Oct 5 12:08:36.868: INFO: PersistentVolume local-pvz8drj found and phase=Bound (3.801221ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Oct 5 12:08:38.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7601 exec pod-bc5c4122-50a7-4db8-b5c3-8875dde8e415 --namespace=persistent-local-volumes-test-7601 -- stat -c %g /mnt/volume1' Oct 5 12:08:39.028: INFO: stderr: "" Oct 5 12:08:39.028: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Oct 5 12:08:41.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7601 exec pod-2a161bef-758d-4581-ad3e-53df0e978cfb --namespace=persistent-local-volumes-test-7601 -- stat -c %g /mnt/volume1' Oct 5 12:08:41.239: INFO: stderr: "" Oct 5 12:08:41.239: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-bc5c4122-50a7-4db8-b5c3-8875dde8e415 in namespace persistent-local-volumes-test-7601 STEP: Deleting second pod STEP: Deleting pod pod-2a161bef-758d-4581-ad3e-53df0e978cfb in namespace persistent-local-volumes-test-7601 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:08:41.254: INFO: Deleting PersistentVolumeClaim "pvc-kkczt" Oct 5 12:08:41.259: INFO: Deleting PersistentVolume "local-pvz8drj" STEP: Removing the test directory Oct 5 12:08:41.263: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-58e683c3-ff8d-4058-8ca3-a9adae1eadac] Namespace:persistent-local-volumes-test-7601 PodName:hostexec-v122-worker-r7r4r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:41.263: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:41.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7601" for this suite. 
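------------------------------
Every "ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ...]}" entry in this section is the framework driving a privileged hostexec (agnhost) pod and re-entering the node's mount namespace, so the /tmp/local-volume-test-* directories are created on the node itself rather than inside the pod's filesystem. A sketch of how such a command line is put together; the helper name is an assumption, while the script string is the one from the block-volume setup that failed earlier in this run:

// host_command_sketch.go - illustrative only; shows the nsenter wrapper the logs record.
package main

import "fmt"

// hostCommand wraps a shell script so it runs in the node's own mount namespace
// when executed inside the privileged hostexec pod.
func hostCommand(script string) []string {
	return []string{"nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--", "sh", "-c", script}
}

func main() {
	// The block-volume setup chains mkdir, dd and losetup exactly like the failed
	// command logged at 12:08:24.150 above.
	dir := "/tmp/local-volume-test-3486447d-087e-4085-a785-e2d087845f7a"
	script := fmt.Sprintf("mkdir -p %[1]s && dd if=/dev/zero of=%[1]s/file bs=4096 count=5120 && losetup -f %[1]s/file", dir)
	fmt.Println(hostCommand(script))
}

The losetup -f step is the one that reported "failed to set up loop device: No such device or address"; it needs a free /dev/loopN device on the node, which the create-loop-devs daemonset is presumably there to provide.
------------------------------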
• [SLOW TEST:10.744 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":5,"skipped":226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:16.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "v122-worker" STEP: Initializing test volumes Oct 5 12:08:18.324: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1743bec1-3179-4de6-a79c-0e80ff3e496f] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:18.324: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:18.488: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f6606e8c-c563-4802-9f2a-8283b8fe3b60] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:18.488: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:18.565: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-59e26f90-3c03-434c-af5a-f50a8a073c6a] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:18.565: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:18.696: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d4257c6a-731c-4848-8472-f079239f066a] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:18.696: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:18.785: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-36622df2-c7df-44d0-a19d-bb84d93aa198] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:18.785: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:18.868: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bd984681-182a-4057-abce-4b9b29a1e554] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:18.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:08:18.982: INFO: Creating a PV followed by a PVC Oct 5 12:08:18.989: INFO: Creating a PV followed by a PVC Oct 5 12:08:18.996: INFO: Creating a PV followed by a PVC Oct 5 12:08:19.004: INFO: Creating a PV followed by a PVC Oct 5 12:08:19.011: INFO: Creating a PV followed by a PVC Oct 5 12:08:19.023: INFO: Creating a PV followed by a PVC Oct 5 12:08:29.085: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "v122-worker2" STEP: Initializing test volumes Oct 5 12:08:31.100: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a43e011b-457c-4a4a-8dab-bbe9242eec47] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:31.100: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:31.251: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-30abbcb8-4747-453a-ac63-e902bd19d4fe] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:31.251: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:31.386: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-05384555-3868-4fc0-853a-9c48477c6f0a] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:31.386: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:31.527: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1346340b-649d-4209-8684-c27ec6246c50] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:31.527: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:31.684: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-17416538-b421-410e-8bba-13ea9eba8244] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:31.684: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:31.837: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-96afa6ff-55d6-4c61-8828-841ac42e2e25] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:31.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:08:31.918: INFO: Creating a PV followed by a PVC Oct 5 12:08:31.926: INFO: Creating a PV followed by a PVC Oct 5 12:08:31.934: INFO: Creating a PV followed by a PVC Oct 5 12:08:31.941: INFO: Creating a PV followed by a PVC Oct 5 12:08:31.948: INFO: Creating a PV followed by a PVC Oct 5 12:08:31.954: INFO: Creating a PV followed by a PVC Oct 5 12:08:42.012: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod management is parallel and pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Oct 5 12:08:42.012: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Oct 5 12:08:42.014: INFO: Deleting PersistentVolumeClaim "pvc-r4d6v" Oct 5 12:08:42.019: INFO: Deleting PersistentVolume "local-pvj2v66" STEP: Cleaning up PVC and PV Oct 5 12:08:42.024: INFO: Deleting PersistentVolumeClaim "pvc-crnj5" Oct 5 12:08:42.029: INFO: Deleting PersistentVolume "local-pvb6d5w" STEP: Cleaning up PVC and PV Oct 5 12:08:42.034: INFO: Deleting PersistentVolumeClaim "pvc-7rkzv" Oct 5 12:08:42.039: INFO: Deleting PersistentVolume "local-pv56vv2" STEP: Cleaning up PVC and PV Oct 5 12:08:42.044: INFO: Deleting PersistentVolumeClaim "pvc-9fw2x" Oct 5 12:08:42.050: INFO: Deleting PersistentVolume "local-pv4rslq" STEP: Cleaning up PVC and PV Oct 5 12:08:42.055: INFO: Deleting PersistentVolumeClaim "pvc-c2n7t" Oct 5 12:08:42.059: INFO: Deleting PersistentVolume "local-pv9j7t5" STEP: Cleaning up PVC and PV Oct 5 12:08:42.064: INFO: Deleting PersistentVolumeClaim "pvc-bmmrc" Oct 5 12:08:42.069: INFO: Deleting PersistentVolume "local-pv8wq8m" STEP: Removing the test directory Oct 5 12:08:42.073: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1743bec1-3179-4de6-a79c-0e80ff3e496f] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:42.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:42.204: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f6606e8c-c563-4802-9f2a-8283b8fe3b60] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:42.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:42.323: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-59e26f90-3c03-434c-af5a-f50a8a073c6a] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Oct 5 12:08:42.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:42.451: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d4257c6a-731c-4848-8472-f079239f066a] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:42.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:42.595: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-36622df2-c7df-44d0-a19d-bb84d93aa198] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:42.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:42.773: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bd984681-182a-4057-abce-4b9b29a1e554] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker-crv8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:42.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Oct 5 12:08:42.886: INFO: Deleting PersistentVolumeClaim "pvc-9f9bx" Oct 5 12:08:42.890: INFO: Deleting PersistentVolume "local-pvx8nf6" STEP: Cleaning up PVC and PV Oct 5 12:08:42.896: INFO: Deleting PersistentVolumeClaim "pvc-k4plp" Oct 5 12:08:42.900: INFO: Deleting PersistentVolume "local-pvcgrns" STEP: Cleaning up PVC and PV Oct 5 12:08:42.904: INFO: Deleting PersistentVolumeClaim "pvc-5h8zl" Oct 5 12:08:42.907: INFO: Deleting PersistentVolume "local-pvr5l28" STEP: Cleaning up PVC and PV Oct 5 12:08:42.911: INFO: Deleting PersistentVolumeClaim "pvc-h2szj" Oct 5 12:08:42.919: INFO: Deleting PersistentVolume "local-pv4chkl" STEP: Cleaning up PVC and PV Oct 5 12:08:42.923: INFO: Deleting PersistentVolumeClaim "pvc-z55hx" Oct 5 12:08:42.927: INFO: Deleting PersistentVolume "local-pvfb7b8" STEP: Cleaning up PVC and PV Oct 5 12:08:42.931: INFO: Deleting PersistentVolumeClaim "pvc-qsxs8" Oct 5 12:08:42.934: INFO: Deleting PersistentVolume "local-pvjnbxv" STEP: Removing the test directory Oct 5 12:08:42.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a43e011b-457c-4a4a-8dab-bbe9242eec47] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:42.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:43.034: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-30abbcb8-4747-453a-ac63-e902bd19d4fe] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:43.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:43.126: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-05384555-3868-4fc0-853a-9c48477c6f0a] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:43.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:43.226: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1346340b-649d-4209-8684-c27ec6246c50] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:43.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:43.351: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-17416538-b421-410e-8bba-13ea9eba8244] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:43.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:08:43.418: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-96afa6ff-55d6-4c61-8828-841ac42e2e25] Namespace:persistent-local-volumes-test-5820 PodName:hostexec-v122-worker2-gpnqw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:43.418: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:43.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5820" for this suite. 
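
The spec being cleaned up here self-skips with "Runs only when number of nodes >= 3", so all of the PV/PVC setup above is torn down without the StatefulSet ever being created. A hedged pre-check for whether a cluster can run these multi-node [Slow] specs (the label selector is an assumption based on how the kind nodes in this log are labelled):

# Count schedulable worker nodes; the anti-affinity StatefulSet spec needs at least 3.
kubectl get nodes --no-headers \
  --selector='!node-role.kubernetes.io/control-plane' | wc -l

On this cluster there are only two workers (v122-worker and v122-worker2), which is why the spec is reported as SKIPPING immediately below.
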
S [SKIPPING] [27.295 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod management is parallel and pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:37.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Oct 5 12:07:07.537: INFO: Deleting pod "pv-3071"/"pod-ephm-test-projected-2cc9" Oct 5 12:07:07.537: INFO: Deleting pod "pod-ephm-test-projected-2cc9" in namespace "pv-3071" Oct 5 12:07:07.542: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-2cc9" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:43.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3071" for this suite. 
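
The Ephemeralstorage spec above checks one thing: a pod whose volume refers to a ConfigMap that was never created must still be deletable. A minimal, hedged reproduction sketch follows; the resource names are illustrative placeholders, not taken from the suite, and the busybox image is one already present on these nodes:

# Create a pod that mounts a ConfigMap which does not exist.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bad-configmap-volume
spec:
  containers:
  - name: c
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cm
      mountPath: /data
  volumes:
  - name: cm
    configMap:
      name: does-not-exist
EOF
# The pod stays in ContainerCreating with FailedMount events...
kubectl get events --field-selector involvedObject.name=bad-configmap-volume
# ...but deleting it must still succeed, which is what the spec asserts.
kubectl delete pod bad-configmap-volume --wait=true
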
• [SLOW TEST:126.066 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":7,"skipped":352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:43.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-5b0bc391-64b0-4f4a-b194-2729095c22b9" Oct 5 12:08:45.702: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5b0bc391-64b0-4f4a-b194-2729095c22b9 && dd if=/dev/zero of=/tmp/local-volume-test-5b0bc391-64b0-4f4a-b194-2729095c22b9/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-5b0bc391-64b0-4f4a-b194-2729095c22b9/file] Namespace:persistent-local-volumes-test-3943 PodName:hostexec-v122-worker2-6r7qp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:45.703: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:45.836: INFO: exec v122-worker2: command: mkdir -p /tmp/local-volume-test-5b0bc391-64b0-4f4a-b194-2729095c22b9 && dd if=/dev/zero of=/tmp/local-volume-test-5b0bc391-64b0-4f4a-b194-2729095c22b9/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-5b0bc391-64b0-4f4a-b194-2729095c22b9/file Oct 5 12:08:45.836: INFO: exec v122-worker2: stdout: "" Oct 5 12:08:45.836: INFO: exec v122-worker2: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0210617 s, 996 MB/s\nlosetup: /tmp/local-volume-test-5b0bc391-64b0-4f4a-b194-2729095c22b9/file: failed to set up loop device: No such device or address\n" Oct 5 12:08:45.836: INFO: exec v122-worker2: exit code: 0 Oct 5 12:08:45.836: FAIL: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).createAndSetupLoopDevice(0xc000efe8a0, 0xc0018c6d00, 0x3b, 0xc0044314d0, 0x1400000) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 +0x45b k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeBlock(0xc000efe8a0, 0xc0044314d0, 0x0, 0x78cd2a8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:146 +0x65 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc000efe8a0, 0xc0044314d0, 0x702c9b3, 0x5, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:306 +0x326 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc0021d5ef0, 0x7069370, 0x14, 0xc0044314d0, 0x1, 0x0, 0x0, 0xc000a27200) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:837 +0x157 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc0021d5ef0, 0x7069370, 0x14, 0xc0044314d0, 0x1, 0x703610f, 0x9, 0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1102 +0x87 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 +0xb6 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00100b380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00100b380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc00100b380, 0x729c7d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-3943". STEP: Found 4 events. 
Oct 5 12:08:45.842: INFO: At 2022-10-05 12:08:43 +0000 UTC - event for hostexec-v122-worker2-6r7qp: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-3943/hostexec-v122-worker2-6r7qp to v122-worker2 Oct 5 12:08:45.843: INFO: At 2022-10-05 12:08:44 +0000 UTC - event for hostexec-v122-worker2-6r7qp: {kubelet v122-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Oct 5 12:08:45.843: INFO: At 2022-10-05 12:08:44 +0000 UTC - event for hostexec-v122-worker2-6r7qp: {kubelet v122-worker2} Created: Created container agnhost-container Oct 5 12:08:45.843: INFO: At 2022-10-05 12:08:44 +0000 UTC - event for hostexec-v122-worker2-6r7qp: {kubelet v122-worker2} Started: Started container agnhost-container Oct 5 12:08:45.846: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 12:08:45.846: INFO: hostexec-v122-worker2-6r7qp v122-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:08:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:08:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:08:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:08:43 +0000 UTC }] Oct 5 12:08:45.846: INFO: Oct 5 12:08:45.850: INFO: Logging node info for node v122-control-plane Oct 5 12:08:45.853: INFO: Node Info: &Node{ObjectMeta:{v122-control-plane 0bba5de9-314a-4743-bf02-bde0ec06daf3 5868 0 2022-10-05 11:59:47 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-10-05 11:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 11:59:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 12:00:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:v122-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:90a9e9edfe9d44d59ee2bec7a8da01cd,SystemUUID:2e684780-1fcb-4016-9109-255b79db130f,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c 
k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:08:45.854: INFO: Logging kubelet events for node v122-control-plane Oct 5 12:08:45.859: INFO: Logging pods the kubelet thinks is on node v122-control-plane Oct 5 12:08:45.877: INFO: kube-proxy-xtt57 started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.877: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:08:45.877: INFO: create-loop-devs-lvpbc started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.877: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:08:45.877: INFO: etcd-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.877: INFO: Container etcd ready: true, restart count 0 Oct 5 12:08:45.877: INFO: kube-apiserver-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.877: INFO: Container kube-apiserver ready: true, restart count 0 Oct 5 12:08:45.877: INFO: kube-controller-manager-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.877: INFO: Container kube-controller-manager ready: true, restart count 0 Oct 5 12:08:45.877: INFO: kube-scheduler-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.877: INFO: Container kube-scheduler ready: true, restart count 0 Oct 5 12:08:45.877: INFO: kindnet-g8rqz started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.877: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:08:45.941: INFO: Latency metrics for node v122-control-plane Oct 5 12:08:45.941: INFO: Logging node info for node v122-worker Oct 5 12:08:45.944: INFO: Node Info: &Node{ObjectMeta:{v122-worker 8286eab4-ee46-4103-bc96-cf44e85cf562 10287 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-376":"csi-mock-csi-mock-volumes-376","csi-mock-csi-mock-volumes-9240":"csi-mock-csi-mock-volumes-9240"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:08:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:08:09 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:v122-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ce5667169114cc58989bd26cdb88021,SystemUUID:f1b8869e-1c17-4972-b832-4d15146806a4,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca 
k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:08:45.945: INFO: Logging kubelet events for node v122-worker Oct 5 12:08:45.950: INFO: Logging pods the kubelet thinks is on node v122-worker Oct 5 12:08:45.979: INFO: create-loop-devs-f76cj started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:08:45.979: INFO: pod-2a161bef-758d-4581-ad3e-53df0e978cfb started at 2022-10-05 12:08:39 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:08:45.979: INFO: hostexec-v122-worker-qt9q8 started at 2022-10-05 12:08:39 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:08:45.979: INFO: pod-configmaps-2bb22201-613d-442b-9f83-a9d39e6f1499 started at 2022-10-05 12:06:31 +0000 UTC (0+1 
container statuses recorded) Oct 5 12:08:45.979: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:08:45.979: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:08:41 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container csi-attacher ready: false, restart count 0 Oct 5 12:08:45.979: INFO: csi-mockplugin-resizer-0 started at 2022-10-05 12:08:41 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container csi-resizer ready: false, restart count 0 Oct 5 12:08:45.979: INFO: kindnet-rkh8m started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:08:45.979: INFO: kube-proxy-xkzrn started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:08:45.979: INFO: csi-mockplugin-0 started at (0+0 container statuses recorded) Oct 5 12:08:45.979: INFO: pod-secrets-9d1aa6a5-fe49-413f-85a9-4c2a8e6f4e5b started at 2022-10-05 12:03:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:08:45.979: INFO: pod-secrets-76b16dac-27d0-4343-a0fe-b8ed5dd81977 started at 2022-10-05 12:06:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:08:45.979: INFO: pod-bc5c4122-50a7-4db8-b5c3-8875dde8e415 started at 2022-10-05 12:08:36 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:08:45.979: INFO: hostexec-v122-worker-crv8m started at 2022-10-05 12:08:16 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:08:45.979: INFO: pvc-volume-tester-fhthn started at 2022-10-05 12:08:40 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container volume-tester ready: false, restart count 0 Oct 5 12:08:45.979: INFO: csi-mockplugin-0 started at 2022-10-05 12:08:41 +0000 UTC (0+3 container statuses recorded) Oct 5 12:08:45.979: INFO: Container csi-provisioner ready: false, restart count 0 Oct 5 12:08:45.979: INFO: Container driver-registrar ready: false, restart count 0 Oct 5 12:08:45.979: INFO: Container mock ready: false, restart count 0 Oct 5 12:08:45.979: INFO: pod-ephm-test-projected-vm97 started at 2022-10-05 12:06:53 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container test-container-subpath-projected-vm97 ready: false, restart count 0 Oct 5 12:08:45.979: INFO: csi-mockplugin-resizer-0 started at (0+0 container statuses recorded) Oct 5 12:08:45.979: INFO: hostexec-v122-worker-r7r4r started at 2022-10-05 12:08:30 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:45.979: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:08:45.979: INFO: csi-mockplugin-0 started at 2022-10-05 12:08:28 +0000 UTC (0+4 container statuses recorded) Oct 5 12:08:45.979: INFO: Container busybox ready: true, restart count 0 Oct 5 12:08:45.979: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:08:45.979: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:08:45.979: INFO: Container mock ready: true, restart count 0 Oct 5 12:08:45.979: INFO: csi-mockplugin-attacher-0 started at (0+0 container statuses recorded) Oct 5 12:08:46.146: INFO: Latency 
metrics for node v122-worker Oct 5 12:08:46.146: INFO: Logging node info for node v122-worker2 Oct 5 12:08:46.150: INFO: Node Info: &Node{ObjectMeta:{v122-worker2 e098b7b6-6804-492f-b9ec-650d1924542e 10291 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2022-10-05 12:08:45 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:39 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:39 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:39 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:08:39 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:v122-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:feea07f38e414515ae57b946e27fa7bb,SystemUUID:07d898dc-4331-403b-9bdf-da8ef413d01c,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:138177747,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 
k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-9034^4,DevicePath:,},},Config:nil,},} Oct 5 12:08:46.150: INFO: Logging kubelet events for node v122-worker2 Oct 5 12:08:46.155: INFO: Logging pods the kubelet thinks is on node v122-worker2 Oct 5 12:08:46.182: INFO: create-loop-devs-6sf59 started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:08:46.182: INFO: pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5 started at 2022-10-05 12:07:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:08:46.182: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:08:38 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container csi-attacher ready: true, restart count 0 Oct 5 12:08:46.182: INFO: hostexec-v122-worker2-6r7qp started at 2022-10-05 12:08:43 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:08:46.182: INFO: kindnet-vqtz2 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:08:46.182: INFO: kube-proxy-pwsq7 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:08:46.182: INFO: hostexec-v122-worker2-gpnqw started at 2022-10-05 12:08:29 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:08:46.182: INFO: coredns-78fcd69978-srwh8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container coredns ready: true, restart count 0 Oct 5 12:08:46.182: INFO: local-path-provisioner-58c8ccd54c-lkwwv started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container local-path-provisioner ready: true, restart count 0 Oct 5 12:08:46.182: INFO: csi-mockplugin-0 started at 2022-10-05 12:08:38 +0000 UTC (0+3 container statuses recorded) Oct 5 12:08:46.182: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:08:46.182: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:08:46.182: INFO: Container mock ready: true, restart count 0 Oct 5 12:08:46.182: INFO: hostexec-v122-worker2-75hw5 started at 2022-10-05 12:06:49 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:08:46.182: INFO: pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 started at 2022-10-05 12:07:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:08:46.182: INFO: pvc-volume-tester-m85d7 started at 2022-10-05 12:08:45 +0000 UTC (0+1 container statuses recorded) Oct 5 12:08:46.182: INFO: Container volume-tester ready: false, restart count 0 Oct 5 12:08:46.182: INFO: coredns-78fcd69978-vrzs8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses 
recorded) Oct 5 12:08:46.182: INFO: Container coredns ready: true, restart count 0 Oct 5 12:08:46.323: INFO: Latency metrics for node v122-worker2 Oct 5 12:08:46.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3943" for this suite. • Failure in Spec Setup (BeforeEach) [2.679 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Oct 5 12:08:45.837: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":7,"skipped":419,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:39.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:08:47.630: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-63eb5e95-5c46-4268-a311-70d141e84c92] Namespace:persistent-local-volumes-test-3389 PodName:hostexec-v122-worker-qt9q8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:47.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:08:47.775: INFO: Creating a PV followed by a PVC Oct 5 12:08:47.784: INFO: Waiting for PV local-pvchj75 to bind to PVC pvc-72477 Oct 5 12:08:47.784: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-72477] to have phase Bound Oct 5 12:08:47.788: INFO: PersistentVolumeClaim pvc-72477 found but phase is Pending instead of Bound. 
Oct 5 12:08:49.791: INFO: PersistentVolumeClaim pvc-72477 found and phase=Bound (2.007051258s) Oct 5 12:08:49.791: INFO: Waiting up to 3m0s for PersistentVolume local-pvchj75 to have phase Bound Oct 5 12:08:49.795: INFO: PersistentVolume local-pvchj75 found and phase=Bound (3.116387ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:08:57.820: INFO: pod "pod-134f03ea-3f9d-4e8d-9eed-8970a555f4d9" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:08:57.820: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3389 PodName:pod-134f03ea-3f9d-4e8d-9eed-8970a555f4d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:08:57.820: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:57.959: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Oct 5 12:08:57.959: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3389 PodName:pod-134f03ea-3f9d-4e8d-9eed-8970a555f4d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:08:57.959: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:58.055: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Oct 5 12:08:58.055: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-63eb5e95-5c46-4268-a311-70d141e84c92 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3389 PodName:pod-134f03ea-3f9d-4e8d-9eed-8970a555f4d9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:08:58.055: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:08:58.180: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-63eb5e95-5c46-4268-a311-70d141e84c92 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-134f03ea-3f9d-4e8d-9eed-8970a555f4d9 in namespace persistent-local-volumes-test-3389 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:08:58.186: INFO: Deleting PersistentVolumeClaim "pvc-72477" Oct 5 12:08:58.191: INFO: Deleting PersistentVolume "local-pvchj75" STEP: Removing the test directory Oct 5 12:08:58.196: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-63eb5e95-5c46-4268-a311-70d141e84c92] Namespace:persistent-local-volumes-test-3389 PodName:hostexec-v122-worker-qt9q8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:08:58.196: INFO: >>> kubeConfig: /root/.kube/config 
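
The podRWCmdExec calls above are shell round-trips through kubectl exec: write a file into the mounted local volume, then read it back. A hedged hand-run equivalent (names copied from this spec's log; they no longer exist once the namespace is destroyed below):

NS=persistent-local-volumes-test-3389
POD=pod-134f03ea-3f9d-4e8d-9eed-8970a555f4d9
# Write into the prebound local volume, then read the file back.
kubectl -n "$NS" exec "$POD" -- /bin/sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
kubectl -n "$NS" exec "$POD" -- /bin/sh -c 'cat /mnt/volume1/test-file'   # expected: test-file-content
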
[AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:58.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3389" for this suite. • [SLOW TEST:18.764 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":12,"skipped":460,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:53.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Oct 5 12:07:23.822: INFO: Deleting pod "pv-9409"/"pod-ephm-test-projected-vm97" Oct 5 12:07:23.822: INFO: Deleting pod "pod-ephm-test-projected-vm97" in namespace "pv-9409" Oct 5 12:07:23.827: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-vm97" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:08:59.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9409" for this suite. 
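The Ephemeralstorage spec above deletes the pod and then waits up to 5m0s for it to be "fully deleted", i.e. until the API server stops returning it rather than merely marking it Terminating. A hedged sketch of that delete-and-poll pattern, assuming a client built as in the earlier sketch and reusing the pod and namespace names from the log:

```go
// Sketch: delete a pod, then poll until the API server returns NotFound.
package example

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func deletePodAndWait(client kubernetes.Interface) error {
	ns, pod := "pv-9409", "pod-ephm-test-projected-vm97" // from the log above

	if err := client.CoreV1().Pods(ns).Delete(context.TODO(), pod, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// "Fully deleted" means gone from the API, not just Terminating.
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}
```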
• [SLOW TEST:126.056 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":9,"skipped":206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:59.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Oct 5 12:09:00.018: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:00.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-5602" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.046 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 GlusterFS [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:128 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:00.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-eb0a33ae-97cf-4f94-9b19-50692c028ff1" Oct 5 12:09:02.099: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-eb0a33ae-97cf-4f94-9b19-50692c028ff1 && dd if=/dev/zero of=/tmp/local-volume-test-eb0a33ae-97cf-4f94-9b19-50692c028ff1/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-eb0a33ae-97cf-4f94-9b19-50692c028ff1/file] Namespace:persistent-local-volumes-test-4656 PodName:hostexec-v122-worker2-s9lhz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:09:02.099: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:09:02.267: INFO: exec v122-worker2: command: mkdir -p /tmp/local-volume-test-eb0a33ae-97cf-4f94-9b19-50692c028ff1 && dd if=/dev/zero of=/tmp/local-volume-test-eb0a33ae-97cf-4f94-9b19-50692c028ff1/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-eb0a33ae-97cf-4f94-9b19-50692c028ff1/file
Oct 5 12:09:02.267: INFO: exec v122-worker2: stdout: ""
Oct 5 12:09:02.267: INFO: exec v122-worker2: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0242413 s, 865 MB/s\nlosetup: /tmp/local-volume-test-eb0a33ae-97cf-4f94-9b19-50692c028ff1/file: failed to set up loop device: No such device or address\n"
Oct 5 12:09:02.267: INFO: exec v122-worker2: exit code: 0
Oct 5 12:09:02.268: FAIL: Unexpected error:
    : {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
    command terminated with exit code 1
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).createAndSetupLoopDevice(0xc00453f0b0, 0xc004111a40, 0x3b, 0xc00389d4d0, 0x1400000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 +0x45b
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeBlock(0xc00453f0b0, 0xc00389d4d0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:146 +0x65
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeBlockFS(0xc00453f0b0, 0xc00389d4d0, 0x0, 0x78cd2a8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:174 +0x5a
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc00453f0b0, 0xc00389d4d0, 0x7030fcb, 0x7, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:308 +0x391
k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc0035923f0, 0x70587e6, 0x11, 0xc00389d4d0, 0x1, 0x0, 0x0, 0xc004691500)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:837 +0x157
k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc0035923f0, 0x70587e6, 0x11, 0xc00389d4d0, 0x1, 0x703610f, 0x9, 0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1102 +0x87
k8s.io/kubernetes/test/e2e/storage.glob..func21.2.1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 +0xb6
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000f88a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000f88a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
testing.tRunner(0xc000f88a80, 0x729c7d8)
	/usr/local/go/src/testing/testing.go:1203 +0xe5
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1248 +0x2b3
[AfterEach] [Volume type: blockfswithformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-4656". STEP: Found 4 events. Oct 5 12:09:02.273: INFO: At 2022-10-05 12:09:00 +0000 UTC - event for hostexec-v122-worker2-s9lhz: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-4656/hostexec-v122-worker2-s9lhz to v122-worker2 Oct 5 12:09:02.273: INFO: At 2022-10-05 12:09:00 +0000 UTC - event for hostexec-v122-worker2-s9lhz: {kubelet v122-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Oct 5 12:09:02.273: INFO: At 2022-10-05 12:09:00 +0000 UTC - event for hostexec-v122-worker2-s9lhz: {kubelet v122-worker2} Created: Created container agnhost-container Oct 5 12:09:02.273: INFO: At 2022-10-05 12:09:00 +0000 UTC - event for hostexec-v122-worker2-s9lhz: {kubelet v122-worker2} Started: Started container agnhost-container Oct 5 12:09:02.276: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 12:09:02.276: INFO: hostexec-v122-worker2-s9lhz v122-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:09:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:09:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:09:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:09:00 +0000 UTC }] Oct 5 12:09:02.276: INFO: Oct 5 12:09:02.280: INFO: Logging node info for node v122-control-plane Oct 5 12:09:02.283: INFO: Node Info: &Node{ObjectMeta:{v122-control-plane 0bba5de9-314a-4743-bf02-bde0ec06daf3 5868 0 2022-10-05 11:59:47 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-10-05 11:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 11:59:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 12:00:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:v122-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:90a9e9edfe9d44d59ee2bec7a8da01cd,SystemUUID:2e684780-1fcb-4016-9109-255b79db130f,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c 
k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:09:02.283: INFO: Logging kubelet events for node v122-control-plane Oct 5 12:09:02.287: INFO: Logging pods the kubelet thinks is on node v122-control-plane Oct 5 12:09:02.300: INFO: kube-apiserver-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.300: INFO: Container kube-apiserver ready: true, restart count 0 Oct 5 12:09:02.300: INFO: kube-controller-manager-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.300: INFO: Container kube-controller-manager ready: true, restart count 0 Oct 5 12:09:02.300: INFO: kube-scheduler-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.300: INFO: Container kube-scheduler ready: true, restart count 0 Oct 5 12:09:02.300: INFO: kindnet-g8rqz started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.300: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:09:02.300: INFO: kube-proxy-xtt57 started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.300: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:09:02.300: INFO: create-loop-devs-lvpbc started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.300: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:09:02.300: INFO: etcd-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.300: INFO: Container etcd ready: true, restart count 0 Oct 5 12:09:02.354: INFO: Latency metrics for node v122-control-plane Oct 5 12:09:02.354: INFO: Logging node info for node v122-worker Oct 5 12:09:02.357: INFO: Node Info: &Node{ObjectMeta:{v122-worker 8286eab4-ee46-4103-bc96-cf44e85cf562 10581 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-376":"csi-mock-csi-mock-volumes-376","csi-mock-csi-mock-volumes-9047":"csi-mock-csi-mock-volumes-9047","csi-mock-csi-mock-volumes-9240":"csi-mock-csi-mock-volumes-9240"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-10-05 12:08:49 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-10-05 12:08:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:v122-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ce5667169114cc58989bd26cdb88021,SystemUUID:f1b8869e-1c17-4972-b832-4d15146806a4,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f 
k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-376^4 kubernetes.io/csi/csi-mock-csi-mock-volumes-9047^4],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-376^4,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-9047^4,DevicePath:,},},Config:nil,},} Oct 5 12:09:02.358: INFO: Logging kubelet events for node v122-worker Oct 5 12:09:02.363: INFO: Logging pods the kubelet thinks is on node v122-worker Oct 5 12:09:02.389: INFO: csi-mockplugin-0 started at 2022-10-05 12:08:41 +0000 UTC (0+3 container statuses recorded) Oct 5 12:09:02.389: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:09:02.389: INFO: Container 
driver-registrar ready: true, restart count 0 Oct 5 12:09:02.390: INFO: Container mock ready: true, restart count 0 Oct 5 12:09:02.390: INFO: csi-mockplugin-resizer-0 started at 2022-10-05 12:08:43 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container csi-resizer ready: true, restart count 0 Oct 5 12:09:02.390: INFO: csi-mockplugin-0 started at 2022-10-05 12:08:28 +0000 UTC (0+4 container statuses recorded) Oct 5 12:09:02.390: INFO: Container busybox ready: true, restart count 0 Oct 5 12:09:02.390: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:09:02.390: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:09:02.390: INFO: Container mock ready: true, restart count 0 Oct 5 12:09:02.390: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:08:43 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container csi-attacher ready: true, restart count 0 Oct 5 12:09:02.390: INFO: pod-configmaps-2bb22201-613d-442b-9f83-a9d39e6f1499 started at 2022-10-05 12:06:31 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:09:02.390: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:08:41 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container csi-attacher ready: true, restart count 0 Oct 5 12:09:02.390: INFO: csi-mockplugin-resizer-0 started at 2022-10-05 12:08:41 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container csi-resizer ready: true, restart count 0 Oct 5 12:09:02.390: INFO: pvc-volume-tester-4wcx6 started at 2022-10-05 12:08:50 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container volume-tester ready: false, restart count 0 Oct 5 12:09:02.390: INFO: create-loop-devs-f76cj started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:09:02.390: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:08:58 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container csi-attacher ready: false, restart count 0 Oct 5 12:09:02.390: INFO: hostexec-v122-worker-qt9q8 started at 2022-10-05 12:08:39 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:09:02.390: INFO: pod-secrets-9d1aa6a5-fe49-413f-85a9-4c2a8e6f4e5b started at 2022-10-05 12:03:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:09:02.390: INFO: pod-secrets-76b16dac-27d0-4343-a0fe-b8ed5dd81977 started at 2022-10-05 12:06:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:09:02.390: INFO: pvc-volume-tester-66mbm started at 2022-10-05 12:08:48 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container volume-tester ready: false, restart count 0 Oct 5 12:09:02.390: INFO: csi-mockplugin-0 started at 2022-10-05 12:08:58 +0000 UTC (0+3 container statuses recorded) Oct 5 12:09:02.390: INFO: Container csi-provisioner ready: false, restart count 0 Oct 5 12:09:02.390: INFO: Container driver-registrar ready: false, restart count 0 Oct 5 12:09:02.390: INFO: Container mock ready: false, restart count 0 Oct 5 12:09:02.390: INFO: pvc-volume-tester-fhthn started at 2022-10-05 12:08:40 +0000 UTC (0+1 container 
statuses recorded) Oct 5 12:09:02.390: INFO: Container volume-tester ready: false, restart count 0 Oct 5 12:09:02.390: INFO: kindnet-rkh8m started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:09:02.390: INFO: kube-proxy-xkzrn started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.390: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:09:02.390: INFO: csi-mockplugin-0 started at 2022-10-05 12:08:43 +0000 UTC (0+3 container statuses recorded) Oct 5 12:09:02.390: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:09:02.390: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:09:02.390: INFO: Container mock ready: true, restart count 0 Oct 5 12:09:02.562: INFO: Latency metrics for node v122-worker Oct 5 12:09:02.562: INFO: Logging node info for node v122-worker2 Oct 5 12:09:02.566: INFO: Node Info: &Node{ObjectMeta:{v122-worker2 e098b7b6-6804-492f-b9ec-650d1924542e 10585 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:08:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:v122-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:feea07f38e414515ae57b946e27fa7bb,SystemUUID:07d898dc-4331-403b-9bdf-da8ef413d01c,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:138177747,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:09:02.566: INFO: Logging kubelet events for node v122-worker2 Oct 5 12:09:02.571: INFO: Logging pods the kubelet thinks is on node v122-worker2 Oct 5 12:09:02.584: INFO: coredns-78fcd69978-srwh8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container coredns ready: true, restart count 0 Oct 5 12:09:02.584: INFO: local-path-provisioner-58c8ccd54c-lkwwv started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container local-path-provisioner ready: true, restart count 0 Oct 5 12:09:02.584: INFO: csi-mockplugin-0 started at 2022-10-05 12:08:38 +0000 UTC (0+3 container statuses recorded) Oct 5 12:09:02.584: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:09:02.584: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:09:02.584: INFO: Container mock ready: true, restart count 0 Oct 5 12:09:02.584: INFO: hostexec-v122-worker2-75hw5 started at 2022-10-05 12:06:49 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:09:02.584: INFO: pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 started at 2022-10-05 12:07:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:09:02.584: INFO: hostexec-v122-worker2-s9lhz started at 2022-10-05 12:09:00 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:09:02.584: INFO: pod-secrets-e827c9fc-8fe2-4070-8ecd-1f57a842134f started at 2022-10-05 12:08:46 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:09:02.584: INFO: coredns-78fcd69978-vrzs8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container coredns ready: true, restart count 0 Oct 5 12:09:02.584: INFO: create-loop-devs-6sf59 started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:09:02.584: INFO: pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5 started at 2022-10-05 12:07:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:09:02.584: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:08:38 +0000 UTC (0+1 container statuses 
recorded) Oct 5 12:09:02.584: INFO: Container csi-attacher ready: true, restart count 0 Oct 5 12:09:02.584: INFO: kindnet-vqtz2 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:09:02.584: INFO: kube-proxy-pwsq7 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:09:02.584: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:09:02.743: INFO: Latency metrics for node v122-worker2 Oct 5 12:09:02.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4656" for this suite. • Failure in Spec Setup (BeforeEach) [2.716 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Oct 5 12:09:02.268: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":9,"skipped":285,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:38.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 STEP: Building a driver namespace object, basename csi-mock-volumes-9034 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:08:38.156: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-attacher Oct 5 12:08:38.159: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9034 Oct 5 12:08:38.159: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9034 Oct 5 12:08:38.163: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9034 Oct 5 12:08:38.167: INFO: creating *v1.Role: csi-mock-volumes-9034-4655/external-attacher-cfg-csi-mock-volumes-9034 Oct 5 12:08:38.170: INFO: creating *v1.RoleBinding: csi-mock-volumes-9034-4655/csi-attacher-role-cfg Oct 5 12:08:38.174: INFO: creating 
*v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-provisioner Oct 5 12:08:38.177: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9034 Oct 5 12:08:38.177: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9034 Oct 5 12:08:38.181: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9034 Oct 5 12:08:38.185: INFO: creating *v1.Role: csi-mock-volumes-9034-4655/external-provisioner-cfg-csi-mock-volumes-9034 Oct 5 12:08:38.190: INFO: creating *v1.RoleBinding: csi-mock-volumes-9034-4655/csi-provisioner-role-cfg Oct 5 12:08:38.193: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-resizer Oct 5 12:08:38.197: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9034 Oct 5 12:08:38.197: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9034 Oct 5 12:08:38.200: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9034 Oct 5 12:08:38.204: INFO: creating *v1.Role: csi-mock-volumes-9034-4655/external-resizer-cfg-csi-mock-volumes-9034 Oct 5 12:08:38.208: INFO: creating *v1.RoleBinding: csi-mock-volumes-9034-4655/csi-resizer-role-cfg Oct 5 12:08:38.212: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-snapshotter Oct 5 12:08:38.215: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9034 Oct 5 12:08:38.215: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9034 Oct 5 12:08:38.219: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9034 Oct 5 12:08:38.223: INFO: creating *v1.Role: csi-mock-volumes-9034-4655/external-snapshotter-leaderelection-csi-mock-volumes-9034 Oct 5 12:08:38.226: INFO: creating *v1.RoleBinding: csi-mock-volumes-9034-4655/external-snapshotter-leaderelection Oct 5 12:08:38.230: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-mock Oct 5 12:08:38.234: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9034 Oct 5 12:08:38.237: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9034 Oct 5 12:08:38.241: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9034 Oct 5 12:08:38.244: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9034 Oct 5 12:08:38.248: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9034 Oct 5 12:08:38.251: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9034 Oct 5 12:08:38.255: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9034 Oct 5 12:08:38.258: INFO: creating *v1.StatefulSet: csi-mock-volumes-9034-4655/csi-mockplugin Oct 5 12:08:38.265: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9034 Oct 5 12:08:38.269: INFO: creating *v1.StatefulSet: csi-mock-volumes-9034-4655/csi-mockplugin-attacher Oct 5 12:08:38.274: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9034" Oct 5 12:08:38.277: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9034 to register on node v122-worker2 STEP: Creating pod Oct 5 12:08:43.290: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:08:43.296: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-v77vs] to have phase Bound Oct 5 12:08:43.298: INFO: PersistentVolumeClaim pvc-v77vs found but phase is Pending instead of Bound. 
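This spec ("should not be passed when podInfoOnMount=nil") deploys the mock driver with the CSIDriver object's podInfoOnMount left unset, so kubelet should not inject pod metadata into the volume_context of publish calls. For illustration only, creating such a CSIDriver object with client-go might look like the sketch below; the driver name is hypothetical, the client is assumed to be built as in the earlier sketch, and the suite itself installs its mock driver from prebuilt manifests (the "creating *v1.*" lines above).

```go
// Illustration: register a CSIDriver whose podInfoOnMount stays nil.
package example

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func registerDriverWithoutPodInfo(client kubernetes.Interface) error {
	driver := &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "example.csi.vendor.io"}, // hypothetical driver name
		// Spec.PodInfoOnMount is deliberately left nil: a nil value behaves like
		// false, so NodePublishVolume requests carry no pod information.
		Spec: storagev1.CSIDriverSpec{},
	}
	_, err := client.StorageV1().CSIDrivers().Create(context.TODO(), driver, metav1.CreateOptions{})
	return err
}
```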
Oct 5 12:08:45.302: INFO: PersistentVolumeClaim pvc-v77vs found and phase=Bound (2.006571405s) STEP: Deleting the previously created pod Oct 5 12:08:51.321: INFO: Deleting pod "pvc-volume-tester-m85d7" in namespace "csi-mock-volumes-9034" Oct 5 12:08:51.326: INFO: Wait up to 5m0s for pod "pvc-volume-tester-m85d7" to be fully deleted STEP: Checking CSI driver logs Oct 5 12:08:55.343: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/cf3729b8-dcab-4625-b004-30c55c088abe/volumes/kubernetes.io~csi/pvc-c5abb4ec-2973-4527-a14a-f7fed25bbf3b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-m85d7 Oct 5 12:08:55.343: INFO: Deleting pod "pvc-volume-tester-m85d7" in namespace "csi-mock-volumes-9034" STEP: Deleting claim pvc-v77vs Oct 5 12:08:55.353: INFO: Waiting up to 2m0s for PersistentVolume pvc-c5abb4ec-2973-4527-a14a-f7fed25bbf3b to get deleted Oct 5 12:08:55.357: INFO: PersistentVolume pvc-c5abb4ec-2973-4527-a14a-f7fed25bbf3b found and phase=Bound (3.230406ms) Oct 5 12:08:57.361: INFO: PersistentVolume pvc-c5abb4ec-2973-4527-a14a-f7fed25bbf3b found and phase=Released (2.007754944s) Oct 5 12:08:59.365: INFO: PersistentVolume pvc-c5abb4ec-2973-4527-a14a-f7fed25bbf3b found and phase=Released (4.011752835s) Oct 5 12:09:01.370: INFO: PersistentVolume pvc-c5abb4ec-2973-4527-a14a-f7fed25bbf3b found and phase=Released (6.016303718s) Oct 5 12:09:03.374: INFO: PersistentVolume pvc-c5abb4ec-2973-4527-a14a-f7fed25bbf3b was removed STEP: Deleting storageclass csi-mock-volumes-9034-scz66rl STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9034 STEP: Waiting for namespaces [csi-mock-volumes-9034] to vanish STEP: uninstalling csi mock driver Oct 5 12:09:09.389: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-attacher Oct 5 12:09:09.394: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9034 Oct 5 12:09:09.399: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9034 Oct 5 12:09:09.404: INFO: deleting *v1.Role: csi-mock-volumes-9034-4655/external-attacher-cfg-csi-mock-volumes-9034 Oct 5 12:09:09.409: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9034-4655/csi-attacher-role-cfg Oct 5 12:09:09.413: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-provisioner Oct 5 12:09:09.418: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9034 Oct 5 12:09:09.422: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9034 Oct 5 12:09:09.427: INFO: deleting *v1.Role: csi-mock-volumes-9034-4655/external-provisioner-cfg-csi-mock-volumes-9034 Oct 5 12:09:09.432: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9034-4655/csi-provisioner-role-cfg Oct 5 12:09:09.436: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-resizer Oct 5 12:09:09.441: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9034 Oct 5 12:09:09.446: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9034 Oct 5 12:09:09.450: INFO: deleting *v1.Role: csi-mock-volumes-9034-4655/external-resizer-cfg-csi-mock-volumes-9034 Oct 5 12:09:09.455: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9034-4655/csi-resizer-role-cfg Oct 5 12:09:09.459: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-snapshotter Oct 5 
12:09:09.464: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9034 Oct 5 12:09:09.468: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9034 Oct 5 12:09:09.472: INFO: deleting *v1.Role: csi-mock-volumes-9034-4655/external-snapshotter-leaderelection-csi-mock-volumes-9034 Oct 5 12:09:09.481: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9034-4655/external-snapshotter-leaderelection Oct 5 12:09:09.486: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9034-4655/csi-mock Oct 5 12:09:09.492: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9034 Oct 5 12:09:09.497: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9034 Oct 5 12:09:09.501: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9034 Oct 5 12:09:09.505: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9034 Oct 5 12:09:09.510: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9034 Oct 5 12:09:09.514: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9034 Oct 5 12:09:09.519: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9034 Oct 5 12:09:09.524: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9034-4655/csi-mockplugin Oct 5 12:09:09.529: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9034 Oct 5 12:09:09.534: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9034-4655/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-9034-4655 STEP: Waiting for namespaces [csi-mock-volumes-9034-4655] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:15.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:37.493 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444 should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":4,"skipped":159,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:41.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688 STEP: Building a driver namespace object, basename csi-mock-volumes-376 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:08:41.530: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-376-1879/csi-attacher Oct 5 12:08:41.533: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-376 Oct 5 12:08:41.533: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-376 Oct 5 12:08:41.537: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-376 Oct 5 12:08:41.542: INFO: creating *v1.Role: csi-mock-volumes-376-1879/external-attacher-cfg-csi-mock-volumes-376 Oct 5 12:08:41.546: INFO: creating *v1.RoleBinding: csi-mock-volumes-376-1879/csi-attacher-role-cfg Oct 5 12:08:41.550: INFO: creating *v1.ServiceAccount: csi-mock-volumes-376-1879/csi-provisioner Oct 5 12:08:41.554: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-376 Oct 5 12:08:41.554: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-376 Oct 5 12:08:41.558: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-376 Oct 5 12:08:41.562: INFO: creating *v1.Role: csi-mock-volumes-376-1879/external-provisioner-cfg-csi-mock-volumes-376 Oct 5 12:08:41.566: INFO: creating *v1.RoleBinding: csi-mock-volumes-376-1879/csi-provisioner-role-cfg Oct 5 12:08:41.570: INFO: creating *v1.ServiceAccount: csi-mock-volumes-376-1879/csi-resizer Oct 5 12:08:41.573: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-376 Oct 5 12:08:41.573: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-376 Oct 5 12:08:41.577: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-376 Oct 5 12:08:41.581: INFO: creating *v1.Role: csi-mock-volumes-376-1879/external-resizer-cfg-csi-mock-volumes-376 Oct 5 12:08:41.585: INFO: creating *v1.RoleBinding: csi-mock-volumes-376-1879/csi-resizer-role-cfg Oct 5 12:08:41.588: INFO: creating *v1.ServiceAccount: csi-mock-volumes-376-1879/csi-snapshotter Oct 5 12:08:41.592: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-376 Oct 5 12:08:41.592: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-376 Oct 5 12:08:41.596: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-376 Oct 5 12:08:41.600: INFO: creating *v1.Role: csi-mock-volumes-376-1879/external-snapshotter-leaderelection-csi-mock-volumes-376 Oct 5 12:08:41.606: INFO: creating *v1.RoleBinding: csi-mock-volumes-376-1879/external-snapshotter-leaderelection Oct 5 12:08:41.610: INFO: creating *v1.ServiceAccount: csi-mock-volumes-376-1879/csi-mock Oct 5 12:08:41.614: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-376 Oct 5 12:08:41.617: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-376 Oct 5 12:08:41.621: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-376 Oct 5 12:08:41.625: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-376 Oct 5 12:08:41.629: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-376 Oct 5 12:08:41.633: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-376 Oct 5 12:08:41.637: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-376 Oct 5 12:08:41.641: INFO: creating *v1.StatefulSet: csi-mock-volumes-376-1879/csi-mockplugin Oct 5 12:08:41.647: INFO: creating *v1.StatefulSet: csi-mock-volumes-376-1879/csi-mockplugin-attacher Oct 5 12:08:41.652: INFO: creating *v1.StatefulSet: csi-mock-volumes-376-1879/csi-mockplugin-resizer Oct 5 
12:08:41.658: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-376 to register on node v122-worker STEP: Creating pod Oct 5 12:08:46.672: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:08:46.678: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-d7ck7] to have phase Bound Oct 5 12:08:46.681: INFO: PersistentVolumeClaim pvc-d7ck7 found but phase is Pending instead of Bound. Oct 5 12:08:48.686: INFO: PersistentVolumeClaim pvc-d7ck7 found and phase=Bound (2.007622147s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-66mbm Oct 5 12:09:06.724: INFO: Deleting pod "pvc-volume-tester-66mbm" in namespace "csi-mock-volumes-376" Oct 5 12:09:06.731: INFO: Wait up to 5m0s for pod "pvc-volume-tester-66mbm" to be fully deleted STEP: Deleting claim pvc-d7ck7 Oct 5 12:09:08.753: INFO: Waiting up to 2m0s for PersistentVolume pvc-0754e8b4-3f4b-4ece-9e89-11864d6ea6f2 to get deleted Oct 5 12:09:08.756: INFO: PersistentVolume pvc-0754e8b4-3f4b-4ece-9e89-11864d6ea6f2 found and phase=Bound (3.625961ms) Oct 5 12:09:10.760: INFO: PersistentVolume pvc-0754e8b4-3f4b-4ece-9e89-11864d6ea6f2 was removed STEP: Deleting storageclass csi-mock-volumes-376-sch9c6r STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-376 STEP: Waiting for namespaces [csi-mock-volumes-376] to vanish STEP: uninstalling csi mock driver Oct 5 12:09:16.776: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-376-1879/csi-attacher Oct 5 12:09:16.781: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-376 Oct 5 12:09:16.785: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-376 Oct 5 12:09:16.789: INFO: deleting *v1.Role: csi-mock-volumes-376-1879/external-attacher-cfg-csi-mock-volumes-376 Oct 5 12:09:16.792: INFO: deleting *v1.RoleBinding: csi-mock-volumes-376-1879/csi-attacher-role-cfg Oct 5 12:09:16.796: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-376-1879/csi-provisioner Oct 5 12:09:16.800: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-376 Oct 5 12:09:16.804: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-376 Oct 5 12:09:16.808: INFO: deleting *v1.Role: csi-mock-volumes-376-1879/external-provisioner-cfg-csi-mock-volumes-376 Oct 5 12:09:16.812: INFO: deleting *v1.RoleBinding: csi-mock-volumes-376-1879/csi-provisioner-role-cfg Oct 5 12:09:16.817: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-376-1879/csi-resizer Oct 5 12:09:16.821: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-376 Oct 5 12:09:16.825: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-376 Oct 5 12:09:16.829: INFO: deleting *v1.Role: csi-mock-volumes-376-1879/external-resizer-cfg-csi-mock-volumes-376 Oct 5 12:09:16.834: INFO: deleting *v1.RoleBinding: csi-mock-volumes-376-1879/csi-resizer-role-cfg Oct 5 12:09:16.838: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-376-1879/csi-snapshotter Oct 5 12:09:16.843: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-376 Oct 5 12:09:16.847: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-376 Oct 5 12:09:16.852: INFO: deleting *v1.Role: csi-mock-volumes-376-1879/external-snapshotter-leaderelection-csi-mock-volumes-376 Oct 5 12:09:16.856: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-376-1879/external-snapshotter-leaderelection Oct 5 12:09:16.860: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-376-1879/csi-mock Oct 5 12:09:16.865: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-376 Oct 5 12:09:16.875: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-376 Oct 5 12:09:16.879: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-376 Oct 5 12:09:16.884: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-376 Oct 5 12:09:16.888: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-376 Oct 5 12:09:16.892: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-376 Oct 5 12:09:16.897: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-376 Oct 5 12:09:16.902: INFO: deleting *v1.StatefulSet: csi-mock-volumes-376-1879/csi-mockplugin Oct 5 12:09:16.906: INFO: deleting *v1.StatefulSet: csi-mock-volumes-376-1879/csi-mockplugin-attacher Oct 5 12:09:16.911: INFO: deleting *v1.StatefulSet: csi-mock-volumes-376-1879/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-376-1879 STEP: Waiting for namespaces [csi-mock-volumes-376-1879] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:22.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:41.488 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:673 should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":6,"skipped":263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:23.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Oct 5 12:09:23.048: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:23.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-4225" for this suite. 
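Editor's note on the online-expansion spec that just passed: the test grows the claim in place by raising spec.resources.requests.storage on the bound PVC and then waiting for the PV and PVC resize to finish, without restarting the pod. The following is a minimal client-go sketch of that flow, not the e2e framework's own helper; the namespace, claim name and target size are illustrative placeholders.

```go
// Minimal sketch: bump a PVC's requested size and wait until status.capacity
// catches up, roughly what the "Expanding current pvc" / "Waiting for PVC
// resize to finish" steps in the log correspond to. Placeholders throughout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, claim := "csi-mock-volumes-example", "pvc-example" // placeholders
	newSize := resource.MustParse("2Gi")                   // placeholder target size

	// Request the expansion by raising spec.resources.requests.storage.
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pvc.Spec.Resources.Requests == nil {
		pvc.Spec.Resources.Requests = corev1.ResourceList{}
	}
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = newSize
	if _, err := cs.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Wait until the resize is reflected in status.capacity.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		cur, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		got := cur.Status.Capacity[corev1.ResourceStorage]
		return got.Cmp(newSize) >= 0, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("PVC resize finished")
}
```

With the mock driver deployed above (attach=on, nodeExpansion=on) the controller-side resizer expands the backing volume and the node side grows it while the pod keeps running, which is why the tester pod is only deleted afterwards, during cleanup.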
S [SKIPPING] in Spec Setup (BeforeEach) [0.048 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:23.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144 [It] should test that deleting a claim before the volume is provisioned deletes the volume. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:424 Oct 5 12:09:23.107: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:23.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-3038" for this suite. S [SKIPPING] [0.042 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:150 should test that deleting a claim before the volume is provisioned deletes the volume. 
[It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:424 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:430 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:23.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:09:23.187: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:23.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6393" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct BlockMode PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:270 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:23.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144 [It] deletion should be idempotent /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:470 Oct 5 12:09:23.270: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:23.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-5313" for this suite. 
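Editor's note on the skipped specs above: this suite runs against a local kind cluster, so dynamic-provisioning and volume-metrics specs that need a cloud provisioner gate themselves on the provider the suite was started with and bail out early. The sketch below is a simplified stand-in for that gate, mimicking the "Only supported for providers ... (not local)" message seen above; the real suite uses its own skipper helpers.

```go
// Simplified stand-in for the provider gate behind the SKIPPING entries above:
// the suite knows which provider it runs against ("local" for kind) and a spec
// skips itself unless that provider is in its allow-list.
package main

import "fmt"

// skipUnlessProviderIs returns false (meaning: skip) unless the current
// provider appears in the spec's supported list.
func skipUnlessProviderIs(current string, supported ...string) bool {
	for _, p := range supported {
		if p == current {
			return true
		}
	}
	fmt.Printf("Only supported for providers %v (not %s)\n", supported, current)
	return false
}

func main() {
	provider := "local" // what this suite runs with (kind cluster)
	if !skipUnlessProviderIs(provider, "gce", "gke", "aws") {
		fmt.Println("S [SKIPPING] spec requires a cloud provider")
		return
	}
	fmt.Println("running provider-specific dynamic provisioning spec")
}
```

Because the check runs in BeforeEach or at the top of the It body, these specs are reported as SKIPPING after only a few milliseconds, matching the [0.042 seconds] / [0.043 seconds] entries above.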
S [SKIPPING] [0.043 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:150 deletion should be idempotent [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:470 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:476 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:58.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should be plumbed down when csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525 STEP: Building a driver namespace object, basename csi-mock-volumes-6747 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:08:58.443: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-attacher Oct 5 12:08:58.447: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6747 Oct 5 12:08:58.447: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6747 Oct 5 12:08:58.451: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6747 Oct 5 12:08:58.455: INFO: creating *v1.Role: csi-mock-volumes-6747-5656/external-attacher-cfg-csi-mock-volumes-6747 Oct 5 12:08:58.459: INFO: creating *v1.RoleBinding: csi-mock-volumes-6747-5656/csi-attacher-role-cfg Oct 5 12:08:58.463: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-provisioner Oct 5 12:08:58.467: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6747 Oct 5 12:08:58.467: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6747 Oct 5 12:08:58.470: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6747 Oct 5 12:08:58.474: INFO: creating *v1.Role: csi-mock-volumes-6747-5656/external-provisioner-cfg-csi-mock-volumes-6747 Oct 5 12:08:58.478: INFO: creating *v1.RoleBinding: csi-mock-volumes-6747-5656/csi-provisioner-role-cfg Oct 5 12:08:58.482: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-resizer Oct 5 12:08:58.487: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6747 Oct 5 12:08:58.487: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6747 Oct 5 12:08:58.490: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6747 Oct 5 12:08:58.494: INFO: creating *v1.Role: csi-mock-volumes-6747-5656/external-resizer-cfg-csi-mock-volumes-6747 Oct 5 12:08:58.497: INFO: creating *v1.RoleBinding: csi-mock-volumes-6747-5656/csi-resizer-role-cfg Oct 5 12:08:58.501: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-snapshotter Oct 5 12:08:58.504: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-6747 Oct 5 12:08:58.504: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6747 Oct 5 12:08:58.508: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6747 Oct 5 12:08:58.511: INFO: creating *v1.Role: csi-mock-volumes-6747-5656/external-snapshotter-leaderelection-csi-mock-volumes-6747 Oct 5 12:08:58.515: INFO: creating *v1.RoleBinding: csi-mock-volumes-6747-5656/external-snapshotter-leaderelection Oct 5 12:08:58.519: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-mock Oct 5 12:08:58.522: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6747 Oct 5 12:08:58.526: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6747 Oct 5 12:08:58.530: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6747 Oct 5 12:08:58.534: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6747 Oct 5 12:08:58.537: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6747 Oct 5 12:08:58.541: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6747 Oct 5 12:08:58.545: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6747 Oct 5 12:08:58.549: INFO: creating *v1.StatefulSet: csi-mock-volumes-6747-5656/csi-mockplugin Oct 5 12:08:58.558: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6747 Oct 5 12:08:58.563: INFO: creating *v1.StatefulSet: csi-mock-volumes-6747-5656/csi-mockplugin-attacher Oct 5 12:08:58.568: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6747" Oct 5 12:08:58.571: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6747 to register on node v122-worker STEP: Creating pod Oct 5 12:09:08.090: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:09:08.098: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6rxkw] to have phase Bound Oct 5 12:09:08.102: INFO: PersistentVolumeClaim pvc-6rxkw found but phase is Pending instead of Bound. 
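Editor's note on this CSIServiceAccountToken spec: the mock CSIDriver is registered with token requests enabled, so the kubelet mints a token for the pod's service account and hands it to the driver at publish time. Below is a hedged sketch of the CSIDriver shape involved; the driver name and audience are placeholders, and the field names are as they appear in the storage/v1 API at roughly this release, not copied from the test itself.

```go
// Sketch of what csiServiceAccountTokenEnabled=true corresponds to on the API
// side: the CSIDriver asks for pod service-account tokens via spec.tokenRequests,
// and the kubelet passes them to the driver under the
// "csi.storage.k8s.io/serviceAccount.tokens" volume-context key.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	republish := true
	driver := &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-example"}, // placeholder name
		Spec: storagev1.CSIDriverSpec{
			// Ask the kubelet to mint a token for the pod's service account
			// and hand it to the driver when the volume is published.
			TokenRequests:     []storagev1.TokenRequest{{Audience: ""}}, // placeholder audience
			RequiresRepublish: &republish,
		},
	}
	fmt.Printf("CSIDriver %s requests %d token(s)\n", driver.Name, len(driver.Spec.TokenRequests))
}
```

The "Found volume attribute csi.storage.k8s.io/serviceAccount.tokens" line further down is the test grepping the mock driver's log for exactly that plumbed-down token.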
Oct 5 12:09:10.107: INFO: PersistentVolumeClaim pvc-6rxkw found and phase=Bound (2.008547518s) STEP: Deleting the previously created pod Oct 5 12:09:23.129: INFO: Deleting pod "pvc-volume-tester-44jch" in namespace "csi-mock-volumes-6747" Oct 5 12:09:23.135: INFO: Wait up to 5m0s for pod "pvc-volume-tester-44jch" to be fully deleted STEP: Checking CSI driver logs Oct 5 12:09:27.164: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6ImlsZmZONDFoWjNTWVU3Q1RvTWtRc1FOWGRCRDNSRjJuVEh3RGtoTzRNR0UifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjY0OTcyMzYxLCJpYXQiOjE2NjQ5NzE3NjEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTY3NDciLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLTQ0amNoIiwidWlkIjoiMDQ0N2E2OGUtMGIwOC00YzAwLTlmM2YtNzUwNDBmN2NlMTA0In0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiNjcyMDlhYWItMmM1Zi00NTI4LTlhNWYtNTcyYzI2YjhiZmY5In19LCJuYmYiOjE2NjQ5NzE3NjEsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTY3NDc6ZGVmYXVsdCJ9.Dr0E-xoaOdJN2D9eQFEf-m_8jf2irxpRaPIIXWBxP1gCzqmMn7lvi0Kv9OWMqjRz2wZwlGbvkTrPqN5ei4E8K9FjOet1QYa5KO0LoJXP8Tv0Yne9lO6jj1InMQnck3AwUUeg5WEWLs2jmNGe5kliiMAcAoI5Ra-vdLGjTIKTZsfY-o4cSX-jvXqh4KtA0LgK4v2zga1HU0SxM_YSdDOeup4Kk8z9L4wR1peIUrd9gm0dPVbxbKJ5iniq08zB5XS_K9bejJgtgsxAtdMG7CVEgJGMS3_NHdTwG4hWiqzVVFpXGlGgAXaczjsXUfRcQHzGx1f7Hx72oLkumRQuEHDLeg","expirationTimestamp":"2022-10-05T12:19:21Z"}} Oct 5 12:09:27.164: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/0447a68e-0b08-4c00-9f3f-75040f7ce104/volumes/kubernetes.io~csi/pvc-1894a586-f65b-4a40-8c4e-b7fb61a22f1e/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-44jch Oct 5 12:09:27.164: INFO: Deleting pod "pvc-volume-tester-44jch" in namespace "csi-mock-volumes-6747" STEP: Deleting claim pvc-6rxkw Oct 5 12:09:27.178: INFO: Waiting up to 2m0s for PersistentVolume pvc-1894a586-f65b-4a40-8c4e-b7fb61a22f1e to get deleted Oct 5 12:09:27.181: INFO: PersistentVolume pvc-1894a586-f65b-4a40-8c4e-b7fb61a22f1e found and phase=Bound (3.570917ms) Oct 5 12:09:29.185: INFO: PersistentVolume pvc-1894a586-f65b-4a40-8c4e-b7fb61a22f1e found and phase=Released (2.007325302s) Oct 5 12:09:31.189: INFO: PersistentVolume pvc-1894a586-f65b-4a40-8c4e-b7fb61a22f1e was removed STEP: Deleting storageclass csi-mock-volumes-6747-sccv929 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6747 STEP: Waiting for namespaces [csi-mock-volumes-6747] to vanish STEP: uninstalling csi mock driver Oct 5 12:09:37.206: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-attacher Oct 5 12:09:37.212: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6747 Oct 5 12:09:37.217: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6747 Oct 5 12:09:37.222: INFO: deleting *v1.Role: csi-mock-volumes-6747-5656/external-attacher-cfg-csi-mock-volumes-6747 Oct 5 12:09:37.232: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6747-5656/csi-attacher-role-cfg Oct 5 12:09:37.236: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-provisioner Oct 5 12:09:37.241: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6747 Oct 5 
12:09:37.245: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6747 Oct 5 12:09:37.250: INFO: deleting *v1.Role: csi-mock-volumes-6747-5656/external-provisioner-cfg-csi-mock-volumes-6747 Oct 5 12:09:37.254: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6747-5656/csi-provisioner-role-cfg Oct 5 12:09:37.259: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-resizer Oct 5 12:09:37.264: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6747 Oct 5 12:09:37.268: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6747 Oct 5 12:09:37.273: INFO: deleting *v1.Role: csi-mock-volumes-6747-5656/external-resizer-cfg-csi-mock-volumes-6747 Oct 5 12:09:37.277: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6747-5656/csi-resizer-role-cfg Oct 5 12:09:37.282: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-snapshotter Oct 5 12:09:37.286: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6747 Oct 5 12:09:37.291: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6747 Oct 5 12:09:37.295: INFO: deleting *v1.Role: csi-mock-volumes-6747-5656/external-snapshotter-leaderelection-csi-mock-volumes-6747 Oct 5 12:09:37.300: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6747-5656/external-snapshotter-leaderelection Oct 5 12:09:37.304: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6747-5656/csi-mock Oct 5 12:09:37.309: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6747 Oct 5 12:09:37.314: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6747 Oct 5 12:09:37.318: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6747 Oct 5 12:09:37.323: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6747 Oct 5 12:09:37.328: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6747 Oct 5 12:09:37.332: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6747 Oct 5 12:09:37.337: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6747 Oct 5 12:09:37.341: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6747-5656/csi-mockplugin Oct 5 12:09:37.346: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6747 Oct 5 12:09:37.351: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6747-5656/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-6747-5656 STEP: Waiting for namespaces [csi-mock-volumes-6747-5656] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:43.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:45.020 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497 token should be plumbed down when csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when 
csiServiceAccountTokenEnabled=true","total":-1,"completed":13,"skipped":464,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:43.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591 STEP: Building a driver namespace object, basename csi-mock-volumes-9047 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:08:43.725: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9047-904/csi-attacher Oct 5 12:08:43.729: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9047 Oct 5 12:08:43.729: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9047 Oct 5 12:08:43.733: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9047 Oct 5 12:08:43.736: INFO: creating *v1.Role: csi-mock-volumes-9047-904/external-attacher-cfg-csi-mock-volumes-9047 Oct 5 12:08:43.740: INFO: creating *v1.RoleBinding: csi-mock-volumes-9047-904/csi-attacher-role-cfg Oct 5 12:08:43.744: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9047-904/csi-provisioner Oct 5 12:08:43.747: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9047 Oct 5 12:08:43.748: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9047 Oct 5 12:08:43.751: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9047 Oct 5 12:08:43.754: INFO: creating *v1.Role: csi-mock-volumes-9047-904/external-provisioner-cfg-csi-mock-volumes-9047 Oct 5 12:08:43.758: INFO: creating *v1.RoleBinding: csi-mock-volumes-9047-904/csi-provisioner-role-cfg Oct 5 12:08:43.762: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9047-904/csi-resizer Oct 5 12:08:43.765: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9047 Oct 5 12:08:43.765: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9047 Oct 5 12:08:43.768: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9047 Oct 5 12:08:43.772: INFO: creating *v1.Role: csi-mock-volumes-9047-904/external-resizer-cfg-csi-mock-volumes-9047 Oct 5 12:08:43.776: INFO: creating *v1.RoleBinding: csi-mock-volumes-9047-904/csi-resizer-role-cfg Oct 5 12:08:43.780: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9047-904/csi-snapshotter Oct 5 12:08:43.783: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9047 Oct 5 12:08:43.783: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9047 Oct 5 12:08:43.787: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9047 Oct 5 12:08:43.791: INFO: creating *v1.Role: csi-mock-volumes-9047-904/external-snapshotter-leaderelection-csi-mock-volumes-9047 Oct 5 12:08:43.795: INFO: creating *v1.RoleBinding: csi-mock-volumes-9047-904/external-snapshotter-leaderelection Oct 5 12:08:43.799: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-9047-904/csi-mock Oct 5 12:08:43.802: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9047 Oct 5 12:08:43.805: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9047 Oct 5 12:08:43.809: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9047 Oct 5 12:08:43.812: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9047 Oct 5 12:08:43.816: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9047 Oct 5 12:08:43.819: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9047 Oct 5 12:08:43.823: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9047 Oct 5 12:08:43.826: INFO: creating *v1.StatefulSet: csi-mock-volumes-9047-904/csi-mockplugin Oct 5 12:08:43.833: INFO: creating *v1.StatefulSet: csi-mock-volumes-9047-904/csi-mockplugin-attacher Oct 5 12:08:43.838: INFO: creating *v1.StatefulSet: csi-mock-volumes-9047-904/csi-mockplugin-resizer Oct 5 12:08:43.843: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9047 to register on node v122-worker STEP: Creating pod Oct 5 12:08:48.857: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:08:48.864: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-2h5ck] to have phase Bound Oct 5 12:08:48.867: INFO: PersistentVolumeClaim pvc-2h5ck found but phase is Pending instead of Bound. Oct 5 12:08:50.872: INFO: PersistentVolumeClaim pvc-2h5ck found and phase=Bound (2.007840632s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Oct 5 12:09:08.914: INFO: Deleting pod "pvc-volume-tester-4wcx6" in namespace "csi-mock-volumes-9047" Oct 5 12:09:08.920: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4wcx6" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-4wcx6 Oct 5 12:09:20.939: INFO: Deleting pod "pvc-volume-tester-4wcx6" in namespace "csi-mock-volumes-9047" STEP: Deleting pod pvc-volume-tester-9bgqc Oct 5 12:09:20.943: INFO: Deleting pod "pvc-volume-tester-9bgqc" in namespace "csi-mock-volumes-9047" Oct 5 12:09:20.949: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9bgqc" to be fully deleted STEP: Deleting claim pvc-2h5ck Oct 5 12:09:24.965: INFO: Waiting up to 2m0s for PersistentVolume pvc-f4aa1df0-4258-4d80-9c2c-9815d01679bc to get deleted Oct 5 12:09:24.968: INFO: PersistentVolume pvc-f4aa1df0-4258-4d80-9c2c-9815d01679bc found and phase=Bound (3.199488ms) Oct 5 12:09:26.972: INFO: PersistentVolume pvc-f4aa1df0-4258-4d80-9c2c-9815d01679bc found and phase=Released (2.007693512s) Oct 5 12:09:28.977: INFO: PersistentVolume pvc-f4aa1df0-4258-4d80-9c2c-9815d01679bc found and phase=Released (4.012277724s) Oct 5 12:09:30.982: INFO: PersistentVolume pvc-f4aa1df0-4258-4d80-9c2c-9815d01679bc found and phase=Released (6.017205912s) Oct 5 12:09:32.987: INFO: PersistentVolume pvc-f4aa1df0-4258-4d80-9c2c-9815d01679bc was removed STEP: Deleting storageclass csi-mock-volumes-9047-scsmlv4 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9047 STEP: Waiting for namespaces [csi-mock-volumes-9047] to vanish STEP: uninstalling csi mock driver Oct 5 12:09:39.005: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-9047-904/csi-attacher Oct 5 12:09:39.011: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9047 Oct 5 12:09:39.016: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9047 Oct 5 12:09:39.021: INFO: deleting *v1.Role: csi-mock-volumes-9047-904/external-attacher-cfg-csi-mock-volumes-9047 Oct 5 12:09:39.026: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9047-904/csi-attacher-role-cfg Oct 5 12:09:39.030: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9047-904/csi-provisioner Oct 5 12:09:39.035: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9047 Oct 5 12:09:39.039: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9047 Oct 5 12:09:39.044: INFO: deleting *v1.Role: csi-mock-volumes-9047-904/external-provisioner-cfg-csi-mock-volumes-9047 Oct 5 12:09:39.048: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9047-904/csi-provisioner-role-cfg Oct 5 12:09:39.053: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9047-904/csi-resizer Oct 5 12:09:39.057: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9047 Oct 5 12:09:39.062: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9047 Oct 5 12:09:39.066: INFO: deleting *v1.Role: csi-mock-volumes-9047-904/external-resizer-cfg-csi-mock-volumes-9047 Oct 5 12:09:39.071: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9047-904/csi-resizer-role-cfg Oct 5 12:09:39.075: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9047-904/csi-snapshotter Oct 5 12:09:39.079: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9047 Oct 5 12:09:39.084: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9047 Oct 5 12:09:39.088: INFO: deleting *v1.Role: csi-mock-volumes-9047-904/external-snapshotter-leaderelection-csi-mock-volumes-9047 Oct 5 12:09:39.093: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9047-904/external-snapshotter-leaderelection Oct 5 12:09:39.097: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9047-904/csi-mock Oct 5 12:09:39.102: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9047 Oct 5 12:09:39.106: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9047 Oct 5 12:09:39.111: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9047 Oct 5 12:09:39.126: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9047 Oct 5 12:09:39.130: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9047 Oct 5 12:09:39.135: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9047 Oct 5 12:09:39.140: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9047 Oct 5 12:09:39.144: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9047-904/csi-mockplugin Oct 5 12:09:39.149: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9047-904/csi-mockplugin-attacher Oct 5 12:09:39.155: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9047-904/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-9047-904 STEP: Waiting for namespaces [csi-mock-volumes-9047-904] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:45.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • 
[SLOW TEST:61.533 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562 should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":6,"skipped":238,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:45.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 STEP: Creating configMap with name configmap-test-volume-a4f814a9-7385-4507-a11c-9cf952804219 STEP: Creating a pod to test consume configMaps Oct 5 12:09:45.237: INFO: Waiting up to 5m0s for pod "pod-configmaps-5bce4fa7-b490-475f-b88b-bf316a21dbe7" in namespace "configmap-2289" to be "Succeeded or Failed" Oct 5 12:09:45.240: INFO: Pod "pod-configmaps-5bce4fa7-b490-475f-b88b-bf316a21dbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090717ms Oct 5 12:09:47.244: INFO: Pod "pod-configmaps-5bce4fa7-b490-475f-b88b-bf316a21dbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007569388s Oct 5 12:09:49.248: INFO: Pod "pod-configmaps-5bce4fa7-b490-475f-b88b-bf316a21dbe7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011651708s STEP: Saw pod success Oct 5 12:09:49.249: INFO: Pod "pod-configmaps-5bce4fa7-b490-475f-b88b-bf316a21dbe7" satisfied condition "Succeeded or Failed" Oct 5 12:09:49.252: INFO: Trying to get logs from node v122-worker pod pod-configmaps-5bce4fa7-b490-475f-b88b-bf316a21dbe7 container agnhost-container: STEP: delete the pod Oct 5 12:09:49.267: INFO: Waiting for pod pod-configmaps-5bce4fa7-b490-475f-b88b-bf316a21dbe7 to disappear Oct 5 12:09:49.270: INFO: Pod pod-configmaps-5bce4fa7-b490-475f-b88b-bf316a21dbe7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:49.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2289" for this suite. 
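Editor's note on the ConfigMap spec that just finished: it mounts a configMap volume with an explicit defaultMode into a pod that runs as a non-root user with an fsGroup set. Below is a rough corev1 sketch of such a pod; the names, UID/GID values, image tag and arguments are illustrative rather than the exact ones the test generates.

```go
// Rough sketch of the pod shape used by the ConfigMap defaultMode/fsGroup
// spec: a configMap volume with an explicit file mode, consumed by a non-root
// container, with a pod-level fsGroup applied to the volume. All concrete
// values are placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(v int64) *int64 { return &v }
func int32Ptr(v int32) *int32 { return &v }
func boolPtr(v bool) *bool    { return &v }

func configMapTestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"}, // placeholder
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    int64Ptr(1000), // non-root consumer
				RunAsNonRoot: boolPtr(true),
				FSGroup:      int64Ptr(2000), // group ownership applied to the volume
			},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						DefaultMode:          int32Ptr(0o440), // file mode the spec asserts on
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // placeholder tag
				// Illustrative: dump the mounted file so the test can read it from the pod log.
				Args: []string{"mounttest", "--file_content=/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() {
	fmt.Println("example pod:", configMapTestPod().Name)
}
```

The test then waits for the pod to reach Succeeded and reads its container log to verify the mounted content, which is the "Saw pod success" / "Trying to get logs from node" sequence above.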
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":241,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:43.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Oct 5 12:09:43.434: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Oct 5 12:09:43.440: INFO: Default storage class: "standard" Oct 5 12:09:43.440: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Oct 5 12:09:51.465: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-protection9nfbv] to have phase Bound Oct 5 12:09:51.468: INFO: PersistentVolumeClaim pvc-protection9nfbv found and phase=Bound (3.451158ms) STEP: Checking that PVC Protection finalizer is set [It] Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod STEP: Checking that the PVC status is Terminating STEP: Creating second Pod whose scheduling fails because it uses a PVC that is being deleted Oct 5 12:09:51.486: INFO: Waiting up to 5m0s for pod "pvc-tester-fjfn4" in namespace "pvc-protection-46" to be "Unschedulable" Oct 5 12:09:51.489: INFO: Pod "pvc-tester-fjfn4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.411006ms Oct 5 12:09:53.495: INFO: Pod "pvc-tester-fjfn4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009515034s Oct 5 12:09:53.495: INFO: Pod "pvc-tester-fjfn4" satisfied condition "Unschedulable" STEP: Deleting the second pod that uses the PVC that is being deleted Oct 5 12:09:53.499: INFO: Deleting pod "pvc-tester-fjfn4" in namespace "pvc-protection-46" Oct 5 12:09:53.509: INFO: Wait up to 5m0s for pod "pvc-tester-fjfn4" to be fully deleted STEP: Checking again that the PVC status is Terminating STEP: Deleting the first pod that uses the PVC Oct 5 12:09:53.515: INFO: Deleting pod "pvc-tester-6wddn" in namespace "pvc-protection-46" Oct 5 12:09:53.521: INFO: Wait up to 5m0s for pod "pvc-tester-6wddn" to be fully deleted STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod Oct 5 12:09:55.528: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protection9nfbv to be removed Oct 5 12:09:55.531: INFO: Claim "pvc-protection9nfbv" in namespace "pvc-protection-46" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:55.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-46" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:12.145 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":14,"skipped":473,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:49.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:09:51.360: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-dcf08c45-89b5-4e14-87a2-1a7a01803971-backend && mount --bind /tmp/local-volume-test-dcf08c45-89b5-4e14-87a2-1a7a01803971-backend /tmp/local-volume-test-dcf08c45-89b5-4e14-87a2-1a7a01803971-backend && ln -s /tmp/local-volume-test-dcf08c45-89b5-4e14-87a2-1a7a01803971-backend /tmp/local-volume-test-dcf08c45-89b5-4e14-87a2-1a7a01803971] 
Namespace:persistent-local-volumes-test-5570 PodName:hostexec-v122-worker2-5pgmq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:09:51.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:09:51.511: INFO: Creating a PV followed by a PVC Oct 5 12:09:51.520: INFO: Waiting for PV local-pvw4rcw to bind to PVC pvc-6km4h Oct 5 12:09:51.520: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6km4h] to have phase Bound Oct 5 12:09:51.523: INFO: PersistentVolumeClaim pvc-6km4h found but phase is Pending instead of Bound. Oct 5 12:09:53.527: INFO: PersistentVolumeClaim pvc-6km4h found and phase=Bound (2.006963928s) Oct 5 12:09:53.527: INFO: Waiting up to 3m0s for PersistentVolume local-pvw4rcw to have phase Bound Oct 5 12:09:53.530: INFO: PersistentVolume local-pvw4rcw found and phase=Bound (2.790824ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:09:55.555: INFO: pod "pod-827ca8e5-da29-435f-afbc-57383320b967" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:09:55.555: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5570 PodName:pod-827ca8e5-da29-435f-afbc-57383320b967 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:55.556: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:09:55.685: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Oct 5 12:09:55.685: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5570 PodName:pod-827ca8e5-da29-435f-afbc-57383320b967 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:55.685: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:09:55.762: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-827ca8e5-da29-435f-afbc-57383320b967 in namespace persistent-local-volumes-test-5570 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:09:55.768: INFO: Deleting PersistentVolumeClaim "pvc-6km4h" Oct 5 12:09:55.773: INFO: Deleting PersistentVolume "local-pvw4rcw" STEP: Removing the test directory Oct 5 12:09:55.777: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-dcf08c45-89b5-4e14-87a2-1a7a01803971 && umount /tmp/local-volume-test-dcf08c45-89b5-4e14-87a2-1a7a01803971-backend && rm -r /tmp/local-volume-test-dcf08c45-89b5-4e14-87a2-1a7a01803971-backend] Namespace:persistent-local-volumes-test-5570 
PodName:hostexec-v122-worker2-5pgmq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:09:55.777: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:09:55.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5570" for this suite. • [SLOW TEST:6.593 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":8,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:02.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583 STEP: Building a driver namespace object, basename csi-mock-volumes-5253 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:09:02.887: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-attacher Oct 5 12:09:02.891: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5253 Oct 5 12:09:02.891: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5253 Oct 5 12:09:02.895: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5253 Oct 5 12:09:02.899: INFO: creating *v1.Role: csi-mock-volumes-5253-7932/external-attacher-cfg-csi-mock-volumes-5253 Oct 5 12:09:02.903: INFO: creating *v1.RoleBinding: csi-mock-volumes-5253-7932/csi-attacher-role-cfg Oct 5 12:09:02.907: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-provisioner Oct 5 12:09:02.910: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5253 Oct 5 12:09:02.911: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5253 Oct 5 12:09:02.915: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5253 Oct 5 12:09:02.919: INFO: creating *v1.Role: csi-mock-volumes-5253-7932/external-provisioner-cfg-csi-mock-volumes-5253 Oct 5 12:09:02.923: INFO: creating *v1.RoleBinding: csi-mock-volumes-5253-7932/csi-provisioner-role-cfg Oct 5 12:09:02.926: INFO: creating 
*v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-resizer Oct 5 12:09:02.930: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5253 Oct 5 12:09:02.930: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5253 Oct 5 12:09:02.933: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5253 Oct 5 12:09:02.937: INFO: creating *v1.Role: csi-mock-volumes-5253-7932/external-resizer-cfg-csi-mock-volumes-5253 Oct 5 12:09:02.941: INFO: creating *v1.RoleBinding: csi-mock-volumes-5253-7932/csi-resizer-role-cfg Oct 5 12:09:02.945: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-snapshotter Oct 5 12:09:02.949: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5253 Oct 5 12:09:02.949: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5253 Oct 5 12:09:02.952: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5253 Oct 5 12:09:02.956: INFO: creating *v1.Role: csi-mock-volumes-5253-7932/external-snapshotter-leaderelection-csi-mock-volumes-5253 Oct 5 12:09:02.964: INFO: creating *v1.RoleBinding: csi-mock-volumes-5253-7932/external-snapshotter-leaderelection Oct 5 12:09:02.968: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-mock Oct 5 12:09:02.971: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5253 Oct 5 12:09:02.975: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5253 Oct 5 12:09:02.979: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5253 Oct 5 12:09:02.983: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5253 Oct 5 12:09:02.986: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5253 Oct 5 12:09:02.990: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5253 Oct 5 12:09:02.993: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5253 Oct 5 12:09:02.997: INFO: creating *v1.StatefulSet: csi-mock-volumes-5253-7932/csi-mockplugin Oct 5 12:09:03.003: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5253 Oct 5 12:09:03.007: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5253" Oct 5 12:09:03.010: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5253 to register on node v122-worker2 STEP: Creating pod with fsGroup Oct 5 12:09:13.034: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:09:13.041: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-sn4bh] to have phase Bound Oct 5 12:09:13.044: INFO: PersistentVolumeClaim pvc-sn4bh found but phase is Pending instead of Bound. 
Oct 5 12:09:15.048: INFO: PersistentVolumeClaim pvc-sn4bh found and phase=Bound (2.007000618s) Oct 5 12:09:17.067: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-5253] Namespace:csi-mock-volumes-5253 PodName:pvc-volume-tester-7fmvg ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:17.068: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:09:17.117: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-5253/csi-mock-volumes-5253'; sync] Namespace:csi-mock-volumes-5253 PodName:pvc-volume-tester-7fmvg ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:17.117: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:09:17.245: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-5253/csi-mock-volumes-5253] Namespace:csi-mock-volumes-5253 PodName:pvc-volume-tester-7fmvg ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:17.245: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:09:17.362: INFO: pod csi-mock-volumes-5253/pvc-volume-tester-7fmvg exec for cmd ls -l /mnt/test/csi-mock-volumes-5253/csi-mock-volumes-5253, stdout: -rw-r--r-- 1 root 1694 13 Oct 5 12:09 /mnt/test/csi-mock-volumes-5253/csi-mock-volumes-5253, stderr: Oct 5 12:09:17.362: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-5253] Namespace:csi-mock-volumes-5253 PodName:pvc-volume-tester-7fmvg ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:17.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-7fmvg Oct 5 12:09:17.482: INFO: Deleting pod "pvc-volume-tester-7fmvg" in namespace "csi-mock-volumes-5253" Oct 5 12:09:17.487: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7fmvg" to be fully deleted STEP: Deleting claim pvc-sn4bh Oct 5 12:09:49.502: INFO: Waiting up to 2m0s for PersistentVolume pvc-2cf1de2d-f80b-4868-9345-0bd666a3ebd5 to get deleted Oct 5 12:09:49.505: INFO: PersistentVolume pvc-2cf1de2d-f80b-4868-9345-0bd666a3ebd5 found and phase=Bound (3.168844ms) Oct 5 12:09:51.510: INFO: PersistentVolume pvc-2cf1de2d-f80b-4868-9345-0bd666a3ebd5 was removed STEP: Deleting storageclass csi-mock-volumes-5253-scjf22z STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5253 STEP: Waiting for namespaces [csi-mock-volumes-5253] to vanish STEP: uninstalling csi mock driver Oct 5 12:09:57.524: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-attacher Oct 5 12:09:57.528: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5253 Oct 5 12:09:57.533: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5253 Oct 5 12:09:57.537: INFO: deleting *v1.Role: csi-mock-volumes-5253-7932/external-attacher-cfg-csi-mock-volumes-5253 Oct 5 12:09:57.542: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5253-7932/csi-attacher-role-cfg Oct 5 12:09:57.547: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-provisioner Oct 5 12:09:57.551: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5253 Oct 5 12:09:57.555: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5253 Oct 5 12:09:57.560: INFO: deleting *v1.Role: 
csi-mock-volumes-5253-7932/external-provisioner-cfg-csi-mock-volumes-5253 Oct 5 12:09:57.564: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5253-7932/csi-provisioner-role-cfg Oct 5 12:09:57.568: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-resizer Oct 5 12:09:57.572: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5253 Oct 5 12:09:57.576: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5253 Oct 5 12:09:57.580: INFO: deleting *v1.Role: csi-mock-volumes-5253-7932/external-resizer-cfg-csi-mock-volumes-5253 Oct 5 12:09:57.584: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5253-7932/csi-resizer-role-cfg Oct 5 12:09:57.588: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-snapshotter Oct 5 12:09:57.593: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5253 Oct 5 12:09:57.597: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5253 Oct 5 12:09:57.601: INFO: deleting *v1.Role: csi-mock-volumes-5253-7932/external-snapshotter-leaderelection-csi-mock-volumes-5253 Oct 5 12:09:57.605: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5253-7932/external-snapshotter-leaderelection Oct 5 12:09:57.609: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5253-7932/csi-mock Oct 5 12:09:57.613: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5253 Oct 5 12:09:57.617: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5253 Oct 5 12:09:57.622: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5253 Oct 5 12:09:57.626: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5253 Oct 5 12:09:57.631: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5253 Oct 5 12:09:57.635: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5253 Oct 5 12:09:57.639: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5253 Oct 5 12:09:57.644: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5253-7932/csi-mockplugin Oct 5 12:09:57.650: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5253 STEP: deleting the driver namespace: csi-mock-volumes-5253-7932 STEP: Waiting for namespaces [csi-mock-volumes-5253-7932] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:03.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:60.873 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559 should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583 ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:55.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace 
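The fsGroupPolicy case above drives its read/write checks through the ExecWithOptions entries logged for pod pvc-volume-tester-7fmvg. Outside the e2e framework, the same pod exec can be approximated with client-go's remotecommand package; the sketch below is only a rough equivalent, with the pod, container, and command supplied by the caller.

    package e2esketch

    import (
        "bytes"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        restclient "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/remotecommand"
    )

    // execInPod runs a shell command in one container of a pod and returns its
    // stdout and stderr, which is the operation the ExecWithOptions entries describe.
    func execInPod(cfg *restclient.Config, cs kubernetes.Interface, ns, pod, container, command string) (string, string, error) {
        req := cs.CoreV1().RESTClient().Post().
            Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: container,
                Command:   []string{"/bin/sh", "-c", command},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
        if err != nil {
            return "", "", err
        }
        var stdout, stderr bytes.Buffer
        err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
        return stdout.String(), stderr.String(), err
    }

With a rest.Config built from the same kubeconfig, a call such as execInPod(cfg, cs, "csi-mock-volumes-5253", "pvc-volume-tester-7fmvg", "volume-tester", "ls -l /mnt/test") would correspond to one of the checks logged above.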
[BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Oct 5 12:09:55.607: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Oct 5 12:09:55.613: INFO: Default storage class: "standard" Oct 5 12:09:55.613: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Oct 5 12:10:01.636: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-protectionxn629] to have phase Bound Oct 5 12:10:01.639: INFO: PersistentVolumeClaim pvc-protectionxn629 found and phase=Bound (3.240208ms) STEP: Checking that PVC Protection finalizer is set [It] Verify that PVC in active use by a pod is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 STEP: Deleting the PVC, however, the PVC must not be removed from the system as it's in active use by a pod STEP: Checking that the PVC status is Terminating STEP: Deleting the pod that uses the PVC Oct 5 12:10:01.652: INFO: Deleting pod "pvc-tester-wbh9t" in namespace "pvc-protection-2302" Oct 5 12:10:01.657: INFO: Wait up to 5m0s for pod "pvc-tester-wbh9t" to be fully deleted STEP: Checking that the PVC is automatically removed from the system because it's no longer in active use by a pod Oct 5 12:10:05.665: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protectionxn629 to be removed Oct 5 12:10:05.668: INFO: Claim "pvc-protectionxn629" in namespace "pvc-protection-2302" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:05.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-2302" for this suite. 
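The passing spec above exercises the PVC protection finalizer: deleting a claim that a running pod still uses only marks it Terminating, and the object disappears once the pod is gone. A small client-go sketch for inspecting that state on an arbitrary claim (namespace and claim name supplied by the caller):

    package e2esketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // describePVCProtection reports whether a claim carries the
    // kubernetes.io/pvc-protection finalizer and whether it is Terminating,
    // i.e. its deletionTimestamp is set but the object still exists.
    func describePVCProtection(cs kubernetes.Interface, ns, name string) (hasFinalizer, terminating bool, err error) {
        pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, false, err
        }
        for _, f := range pvc.Finalizers {
            if f == "kubernetes.io/pvc-protection" {
                hasFinalizer = true
            }
        }
        return hasFinalizer, pvc.DeletionTimestamp != nil, nil
    }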
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:10.104 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PVC in active use by a pod is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":15,"skipped":489,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:15.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583 STEP: Building a driver namespace object, basename csi-mock-volumes-354 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:09:15.643: INFO: creating *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-attacher Oct 5 12:09:15.647: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-354 Oct 5 12:09:15.647: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-354 Oct 5 12:09:15.650: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-354 Oct 5 12:09:15.655: INFO: creating *v1.Role: csi-mock-volumes-354-4685/external-attacher-cfg-csi-mock-volumes-354 Oct 5 12:09:15.659: INFO: creating *v1.RoleBinding: csi-mock-volumes-354-4685/csi-attacher-role-cfg Oct 5 12:09:15.662: INFO: creating *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-provisioner Oct 5 12:09:15.666: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-354 Oct 5 12:09:15.666: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-354 Oct 5 12:09:15.670: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-354 Oct 5 12:09:15.674: INFO: creating *v1.Role: csi-mock-volumes-354-4685/external-provisioner-cfg-csi-mock-volumes-354 Oct 5 12:09:15.678: INFO: creating *v1.RoleBinding: csi-mock-volumes-354-4685/csi-provisioner-role-cfg Oct 5 12:09:15.682: INFO: creating *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-resizer Oct 5 12:09:15.686: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-354 Oct 5 12:09:15.686: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-354 Oct 5 12:09:15.690: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-354 Oct 5 12:09:15.693: INFO: creating *v1.Role: csi-mock-volumes-354-4685/external-resizer-cfg-csi-mock-volumes-354 Oct 5 12:09:15.697: INFO: creating *v1.RoleBinding: csi-mock-volumes-354-4685/csi-resizer-role-cfg Oct 5 12:09:15.700: INFO: creating *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-snapshotter Oct 5 12:09:15.704: INFO: 
creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-354 Oct 5 12:09:15.704: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-354 Oct 5 12:09:15.708: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-354 Oct 5 12:09:15.712: INFO: creating *v1.Role: csi-mock-volumes-354-4685/external-snapshotter-leaderelection-csi-mock-volumes-354 Oct 5 12:09:15.717: INFO: creating *v1.RoleBinding: csi-mock-volumes-354-4685/external-snapshotter-leaderelection Oct 5 12:09:15.720: INFO: creating *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-mock Oct 5 12:09:15.724: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-354 Oct 5 12:09:15.733: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-354 Oct 5 12:09:15.737: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-354 Oct 5 12:09:15.740: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-354 Oct 5 12:09:15.744: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-354 Oct 5 12:09:15.748: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-354 Oct 5 12:09:15.751: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-354 Oct 5 12:09:15.755: INFO: creating *v1.StatefulSet: csi-mock-volumes-354-4685/csi-mockplugin Oct 5 12:09:15.761: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-354 Oct 5 12:09:15.765: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-354" Oct 5 12:09:15.769: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-354 to register on node v122-worker2 STEP: Creating pod with fsGroup Oct 5 12:09:25.786: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:09:25.794: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cjn5s] to have phase Bound Oct 5 12:09:25.797: INFO: PersistentVolumeClaim pvc-cjn5s found but phase is Pending instead of Bound. 
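The "Creating pod with fsGroup" step just above boils down to a pod whose securityContext requests an fsGroup for its mounted claim; whether the kubelet actually applies that group to the volume is what the driver's fsGroupPolicy setting controls. A minimal sketch of such a pod object follows; the pod name, image command, fsGroup value, and claim name are illustrative, not the randomized ones the test generates.

    package e2esketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithFSGroup builds a minimal pod that mounts an existing claim and asks
    // the kubelet to apply fsGroup 1000 to the mounted volume.
    func podWithFSGroup(claimName string) *corev1.Pod {
        fsGroup := int64(1000) // illustrative; the e2e test picks a random group ID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pvc-volume-tester"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
                Containers: []corev1.Container{{
                    Name:    "volume-tester",
                    Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                    Command: []string{"sleep", "3600"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "data",
                        MountPath: "/mnt/test",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "data",
                    VolumeSource: corev1.VolumeSource{
                        PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: claimName},
                    },
                }},
            },
        }
    }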
Oct 5 12:09:27.801: INFO: PersistentVolumeClaim pvc-cjn5s found and phase=Bound (2.007265321s) Oct 5 12:09:29.821: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-354] Namespace:csi-mock-volumes-354 PodName:pvc-volume-tester-mcp27 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:29.821: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:09:29.950: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-354/csi-mock-volumes-354'; sync] Namespace:csi-mock-volumes-354 PodName:pvc-volume-tester-mcp27 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:29.950: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:09:30.073: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-354/csi-mock-volumes-354] Namespace:csi-mock-volumes-354 PodName:pvc-volume-tester-mcp27 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:30.073: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:09:30.176: INFO: pod csi-mock-volumes-354/pvc-volume-tester-mcp27 exec for cmd ls -l /mnt/test/csi-mock-volumes-354/csi-mock-volumes-354, stdout: -rw-r--r-- 1 root 7795 13 Oct 5 12:09 /mnt/test/csi-mock-volumes-354/csi-mock-volumes-354, stderr: Oct 5 12:09:30.176: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-354] Namespace:csi-mock-volumes-354 PodName:pvc-volume-tester-mcp27 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:09:30.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-mcp27 Oct 5 12:09:30.314: INFO: Deleting pod "pvc-volume-tester-mcp27" in namespace "csi-mock-volumes-354" Oct 5 12:09:30.319: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mcp27" to be fully deleted STEP: Deleting claim pvc-cjn5s Oct 5 12:10:02.335: INFO: Waiting up to 2m0s for PersistentVolume pvc-ff48cc5b-29d1-4e06-998e-b219a39062b0 to get deleted Oct 5 12:10:02.338: INFO: PersistentVolume pvc-ff48cc5b-29d1-4e06-998e-b219a39062b0 found and phase=Bound (3.301847ms) Oct 5 12:10:04.343: INFO: PersistentVolume pvc-ff48cc5b-29d1-4e06-998e-b219a39062b0 was removed STEP: Deleting storageclass csi-mock-volumes-354-scw6cb2 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-354 STEP: Waiting for namespaces [csi-mock-volumes-354] to vanish STEP: uninstalling csi mock driver Oct 5 12:10:10.356: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-attacher Oct 5 12:10:10.362: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-354 Oct 5 12:10:10.367: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-354 Oct 5 12:10:10.371: INFO: deleting *v1.Role: csi-mock-volumes-354-4685/external-attacher-cfg-csi-mock-volumes-354 Oct 5 12:10:10.376: INFO: deleting *v1.RoleBinding: csi-mock-volumes-354-4685/csi-attacher-role-cfg Oct 5 12:10:10.380: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-provisioner Oct 5 12:10:10.384: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-354 Oct 5 12:10:10.389: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-354 Oct 5 12:10:10.393: INFO: deleting *v1.Role: csi-mock-volumes-354-4685/external-provisioner-cfg-csi-mock-volumes-354 Oct 5 12:10:10.398: INFO: 
deleting *v1.RoleBinding: csi-mock-volumes-354-4685/csi-provisioner-role-cfg Oct 5 12:10:10.402: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-resizer Oct 5 12:10:10.407: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-354 Oct 5 12:10:10.411: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-354 Oct 5 12:10:10.416: INFO: deleting *v1.Role: csi-mock-volumes-354-4685/external-resizer-cfg-csi-mock-volumes-354 Oct 5 12:10:10.420: INFO: deleting *v1.RoleBinding: csi-mock-volumes-354-4685/csi-resizer-role-cfg Oct 5 12:10:10.425: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-snapshotter Oct 5 12:10:10.429: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-354 Oct 5 12:10:10.433: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-354 Oct 5 12:10:10.438: INFO: deleting *v1.Role: csi-mock-volumes-354-4685/external-snapshotter-leaderelection-csi-mock-volumes-354 Oct 5 12:10:10.442: INFO: deleting *v1.RoleBinding: csi-mock-volumes-354-4685/external-snapshotter-leaderelection Oct 5 12:10:10.446: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-354-4685/csi-mock Oct 5 12:10:10.451: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-354 Oct 5 12:10:10.455: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-354 Oct 5 12:10:10.460: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-354 Oct 5 12:10:10.464: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-354 Oct 5 12:10:10.468: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-354 Oct 5 12:10:10.473: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-354 Oct 5 12:10:10.477: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-354 Oct 5 12:10:10.482: INFO: deleting *v1.StatefulSet: csi-mock-volumes-354-4685/csi-mockplugin Oct 5 12:10:10.487: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-354 STEP: deleting the driver namespace: csi-mock-volumes-354-4685 STEP: Waiting for namespaces [csi-mock-volumes-354-4685] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:16.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:60.945 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559 should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":5,"skipped":162,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:16.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-292dc598-485d-41e7-9ea0-ec174f1bb8df" Oct 5 12:10:18.609: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-292dc598-485d-41e7-9ea0-ec174f1bb8df && dd if=/dev/zero of=/tmp/local-volume-test-292dc598-485d-41e7-9ea0-ec174f1bb8df/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-292dc598-485d-41e7-9ea0-ec174f1bb8df/file] Namespace:persistent-local-volumes-test-2894 PodName:hostexec-v122-worker-xv6cr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:18.609: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:18.776: INFO: exec v122-worker: command: mkdir -p /tmp/local-volume-test-292dc598-485d-41e7-9ea0-ec174f1bb8df && dd if=/dev/zero of=/tmp/local-volume-test-292dc598-485d-41e7-9ea0-ec174f1bb8df/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-292dc598-485d-41e7-9ea0-ec174f1bb8df/file Oct 5 12:10:18.777: INFO: exec v122-worker: stdout: "" Oct 5 12:10:18.777: INFO: exec v122-worker: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0215162 s, 975 MB/s\nlosetup: /tmp/local-volume-test-292dc598-485d-41e7-9ea0-ec174f1bb8df/file: failed to set up loop device: No such device or address\n" Oct 5 12:10:18.777: INFO: exec v122-worker: exit code: 0 Oct 5 12:10:18.777: FAIL: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).createAndSetupLoopDevice(0xc002c996b0, 0xc00257b7c0, 0x3b, 0xc002bb8000, 0x1400000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 +0x45b k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeBlock(0xc002c996b0, 0xc002bb8000, 0x0, 0x78cd2a8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:146 +0x65 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc002c996b0, 0xc002bb8000, 0x702c9b3, 0x5, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:306 +0x326 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc002bb0900, 0x702c9b3, 0x5, 0xc002bb8000, 0x1, 0x0, 0x0, 0xc0022e3c80) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:837 +0x157 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc002bb0900, 0x702c9b3, 0x5, 0xc002bb8000, 0x1, 0x703610f, 0x9, 0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1102 +0x87 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 +0xb6 
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001782480) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001782480) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc001782480, 0x729c7d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-2894". STEP: Found 4 events. Oct 5 12:10:18.782: INFO: At 2022-10-05 12:10:16 +0000 UTC - event for hostexec-v122-worker-xv6cr: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-2894/hostexec-v122-worker-xv6cr to v122-worker Oct 5 12:10:18.782: INFO: At 2022-10-05 12:10:17 +0000 UTC - event for hostexec-v122-worker-xv6cr: {kubelet v122-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Oct 5 12:10:18.782: INFO: At 2022-10-05 12:10:17 +0000 UTC - event for hostexec-v122-worker-xv6cr: {kubelet v122-worker} Created: Created container agnhost-container Oct 5 12:10:18.782: INFO: At 2022-10-05 12:10:17 +0000 UTC - event for hostexec-v122-worker-xv6cr: {kubelet v122-worker} Started: Started container agnhost-container Oct 5 12:10:18.785: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 12:10:18.785: INFO: hostexec-v122-worker-xv6cr v122-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:10:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:10:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:10:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:10:16 +0000 UTC }] Oct 5 12:10:18.785: INFO: Oct 5 12:10:18.789: INFO: Logging node info for node v122-control-plane Oct 5 12:10:18.792: INFO: Node Info: &Node{ObjectMeta:{v122-control-plane 0bba5de9-314a-4743-bf02-bde0ec06daf3 5868 0 2022-10-05 11:59:47 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-10-05 11:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 11:59:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:05:22 +0000 UTC,LastTransitionTime:2022-10-05 12:00:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:v122-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:90a9e9edfe9d44d59ee2bec7a8da01cd,SystemUUID:2e684780-1fcb-4016-9109-255b79db130f,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 
k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:10:18.792: INFO: Logging kubelet events for node v122-control-plane Oct 5 12:10:18.798: INFO: Logging pods the kubelet thinks is on node v122-control-plane Oct 5 12:10:18.819: INFO: etcd-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.819: INFO: Container etcd ready: true, restart count 0 Oct 5 12:10:18.819: INFO: kube-apiserver-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.819: INFO: Container kube-apiserver ready: true, restart count 0 Oct 5 12:10:18.819: INFO: kube-controller-manager-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.819: INFO: Container kube-controller-manager ready: true, restart count 0 Oct 5 12:10:18.819: INFO: kube-scheduler-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.819: INFO: Container kube-scheduler ready: true, restart count 0 Oct 5 12:10:18.819: INFO: kindnet-g8rqz started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.819: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:10:18.819: INFO: kube-proxy-xtt57 started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.819: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:10:18.819: INFO: create-loop-devs-lvpbc started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.819: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:10:18.884: INFO: Latency metrics for node v122-control-plane Oct 5 12:10:18.884: INFO: Logging node info for node v122-worker Oct 5 12:10:18.888: INFO: Node Info: &Node{ObjectMeta:{v122-worker 8286eab4-ee46-4103-bc96-cf44e85cf562 11428 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-9240":"csi-mock-csi-mock-volumes-9240"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:09:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:09:29 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:09:29 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:09:29 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:09:29 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:v122-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ce5667169114cc58989bd26cdb88021,SystemUUID:f1b8869e-1c17-4972-b832-4d15146806a4,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f 
k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:10:18.889: INFO: Logging kubelet events for node v122-worker Oct 5 12:10:18.894: INFO: Logging pods the kubelet thinks is on node v122-worker Oct 5 12:10:18.904: INFO: hostexec-v122-worker-xv6cr started at 2022-10-05 12:10:16 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.904: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:10:18.904: INFO: csi-mockplugin-0 started at 2022-10-05 12:08:28 +0000 UTC (0+4 container statuses recorded) Oct 5 12:10:18.904: INFO: Container busybox ready: true, restart count 0 Oct 5 12:10:18.904: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:10:18.904: 
INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:10:18.904: INFO: Container mock ready: true, restart count 0 Oct 5 12:10:18.904: INFO: create-loop-devs-f76cj started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.904: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:10:18.904: INFO: pod-ephm-test-projected-64t9 started at 2022-10-05 12:10:03 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.904: INFO: Container test-container-subpath-projected-64t9 ready: false, restart count 0 Oct 5 12:10:18.904: INFO: pod-configmaps-2bb22201-613d-442b-9f83-a9d39e6f1499 started at 2022-10-05 12:06:31 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.904: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:10:18.904: INFO: pvc-volume-tester-fhthn started at 2022-10-05 12:08:40 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.904: INFO: Container volume-tester ready: false, restart count 0 Oct 5 12:10:18.904: INFO: kindnet-rkh8m started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.904: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:10:18.904: INFO: kube-proxy-xkzrn started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.904: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:10:18.904: INFO: pod-secrets-76b16dac-27d0-4343-a0fe-b8ed5dd81977 started at 2022-10-05 12:06:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:18.904: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:10:19.044: INFO: Latency metrics for node v122-worker Oct 5 12:10:19.044: INFO: Logging node info for node v122-worker2 Oct 5 12:10:19.048: INFO: Node Info: &Node{ObjectMeta:{v122-worker2 e098b7b6-6804-492f-b9ec-650d1924542e 12106 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-2378":"csi-mock-csi-mock-volumes-2378","csi-mock-csi-mock-volumes-4897":"csi-mock-csi-mock-volumes-4897"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:08:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:08:59 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:v122-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:feea07f38e414515ae57b946e27fa7bb,SystemUUID:07d898dc-4331-403b-9bdf-da8ef413d01c,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:138177747,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 
k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:10:19.049: INFO: Logging kubelet events for node v122-worker2 Oct 5 12:10:19.054: INFO: Logging pods the kubelet thinks is on node v122-worker2 Oct 5 12:10:19.068: INFO: local-path-provisioner-58c8ccd54c-lkwwv started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container local-path-provisioner ready: true, restart count 0 Oct 5 12:10:19.068: INFO: hostexec-v122-worker2-75hw5 started at 2022-10-05 12:06:49 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:10:19.068: INFO: pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 started at 2022-10-05 12:07:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:10:19.068: INFO: csi-mockplugin-0 started at 2022-10-05 12:09:56 +0000 UTC (0+4 container statuses recorded) Oct 5 12:10:19.068: INFO: Container busybox ready: true, restart count 0 Oct 5 12:10:19.068: INFO: Container csi-provisioner ready: false, restart count 1 Oct 5 12:10:19.068: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:10:19.068: INFO: Container mock ready: true, restart count 0 Oct 5 12:10:19.068: INFO: 
pod-secrets-e827c9fc-8fe2-4070-8ecd-1f57a842134f started at 2022-10-05 12:08:46 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:10:19.068: INFO: coredns-78fcd69978-srwh8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container coredns ready: true, restart count 0 Oct 5 12:10:19.068: INFO: pod-configmaps-0701a096-7034-45ea-90fd-45bfd2a603de started at 2022-10-05 12:09:23 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:10:19.068: INFO: coredns-78fcd69978-vrzs8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container coredns ready: true, restart count 0 Oct 5 12:10:19.068: INFO: pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5 started at 2022-10-05 12:07:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:10:19.068: INFO: csi-mockplugin-0 started at 2022-10-05 12:10:05 +0000 UTC (0+3 container statuses recorded) Oct 5 12:10:19.068: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:10:19.068: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:10:19.068: INFO: Container mock ready: true, restart count 0 Oct 5 12:10:19.068: INFO: create-loop-devs-6sf59 started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:10:19.068: INFO: kube-proxy-pwsq7 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:10:19.068: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:10:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container csi-attacher ready: true, restart count 0 Oct 5 12:10:19.068: INFO: kindnet-vqtz2 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:10:19.068: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:10:19.253: INFO: Latency metrics for node v122-worker2 Oct 5 12:10:19.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2894" for this suite. 
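The failure reported for this spec (summarized just below) happened while preparing the block volume: the hostexec pod ran mkdir + dd + losetup on node v122-worker and losetup could not set up a loop device. The following is a small sketch for re-running that same shell sequence directly on a node when checking loop-device availability outside the framework; the directory is a placeholder (the test used a per-run /tmp/local-volume-test-<uuid> path) and the command needs root.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Placeholder directory standing in for the test's per-run path.
        dir := "/tmp/local-volume-test-repro"
        script := fmt.Sprintf(
            "mkdir -p %s && dd if=/dev/zero of=%s/file bs=4096 count=5120 && losetup -f %s/file",
            dir, dir, dir)

        out, err := exec.Command("sh", "-c", script).CombinedOutput()
        fmt.Printf("%s\n", out)
        if err != nil {
            // A non-zero exit here corresponds to the
            // "command terminated with exit code 1" error the test reported.
            fmt.Printf("setup failed: %v\n", err)
        }
    }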
• Failure in Spec Setup (BeforeEach) [2.715 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Oct 5 12:10:18.777: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":5,"skipped":180,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:05.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300 STEP: Building a driver namespace object, basename csi-mock-volumes-2378 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:10:05.784: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-attacher Oct 5 12:10:05.788: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2378 Oct 5 12:10:05.788: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2378 Oct 5 12:10:05.792: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2378 Oct 5 12:10:05.796: INFO: creating *v1.Role: csi-mock-volumes-2378-4017/external-attacher-cfg-csi-mock-volumes-2378 Oct 5 12:10:05.800: INFO: creating *v1.RoleBinding: csi-mock-volumes-2378-4017/csi-attacher-role-cfg Oct 5 12:10:05.804: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-provisioner Oct 5 12:10:05.808: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2378 Oct 5 12:10:05.808: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2378 Oct 5 12:10:05.811: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2378 Oct 5 12:10:05.816: INFO: creating *v1.Role: csi-mock-volumes-2378-4017/external-provisioner-cfg-csi-mock-volumes-2378 Oct 5 12:10:05.820: INFO: creating *v1.RoleBinding: csi-mock-volumes-2378-4017/csi-provisioner-role-cfg Oct 5 12:10:05.824: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-resizer Oct 5 12:10:05.828: INFO: creating *v1.ClusterRole: 
external-resizer-runner-csi-mock-volumes-2378 Oct 5 12:10:05.828: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2378 Oct 5 12:10:05.832: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2378 Oct 5 12:10:05.836: INFO: creating *v1.Role: csi-mock-volumes-2378-4017/external-resizer-cfg-csi-mock-volumes-2378 Oct 5 12:10:05.839: INFO: creating *v1.RoleBinding: csi-mock-volumes-2378-4017/csi-resizer-role-cfg Oct 5 12:10:05.843: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-snapshotter Oct 5 12:10:05.847: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2378 Oct 5 12:10:05.847: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2378 Oct 5 12:10:05.850: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2378 Oct 5 12:10:05.854: INFO: creating *v1.Role: csi-mock-volumes-2378-4017/external-snapshotter-leaderelection-csi-mock-volumes-2378 Oct 5 12:10:05.858: INFO: creating *v1.RoleBinding: csi-mock-volumes-2378-4017/external-snapshotter-leaderelection Oct 5 12:10:05.863: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-mock Oct 5 12:10:05.866: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2378 Oct 5 12:10:05.870: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2378 Oct 5 12:10:05.873: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2378 Oct 5 12:10:05.877: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2378 Oct 5 12:10:05.880: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2378 Oct 5 12:10:05.884: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2378 Oct 5 12:10:05.888: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2378 Oct 5 12:10:05.892: INFO: creating *v1.StatefulSet: csi-mock-volumes-2378-4017/csi-mockplugin Oct 5 12:10:05.898: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2378 Oct 5 12:10:05.902: INFO: creating *v1.StatefulSet: csi-mock-volumes-2378-4017/csi-mockplugin-attacher Oct 5 12:10:05.907: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2378" Oct 5 12:10:05.911: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2378 to register on node v122-worker2 STEP: Creating pod Oct 5 12:10:15.930: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Oct 5 12:10:15.950: INFO: Deleting pod "pvc-volume-tester-7qnvt" in namespace "csi-mock-volumes-2378" Oct 5 12:10:15.957: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7qnvt" to be fully deleted STEP: Deleting pod pvc-volume-tester-7qnvt Oct 5 12:10:15.960: INFO: Deleting pod "pvc-volume-tester-7qnvt" in namespace "csi-mock-volumes-2378" STEP: Deleting claim pvc-k6n72 STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-2378 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2378 STEP: Waiting for namespaces [csi-mock-volumes-2378] to vanish STEP: uninstalling csi mock driver Oct 5 12:10:21.985: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-attacher Oct 5 12:10:21.990: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2378 Oct 5 12:10:21.995: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2378 Oct 5 
12:10:22.000: INFO: deleting *v1.Role: csi-mock-volumes-2378-4017/external-attacher-cfg-csi-mock-volumes-2378 Oct 5 12:10:22.004: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2378-4017/csi-attacher-role-cfg Oct 5 12:10:22.009: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-provisioner Oct 5 12:10:22.013: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2378 Oct 5 12:10:22.018: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2378 Oct 5 12:10:22.023: INFO: deleting *v1.Role: csi-mock-volumes-2378-4017/external-provisioner-cfg-csi-mock-volumes-2378 Oct 5 12:10:22.027: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2378-4017/csi-provisioner-role-cfg Oct 5 12:10:22.032: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-resizer Oct 5 12:10:22.041: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2378 Oct 5 12:10:22.046: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2378 Oct 5 12:10:22.050: INFO: deleting *v1.Role: csi-mock-volumes-2378-4017/external-resizer-cfg-csi-mock-volumes-2378 Oct 5 12:10:22.055: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2378-4017/csi-resizer-role-cfg Oct 5 12:10:22.059: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-snapshotter Oct 5 12:10:22.064: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2378 Oct 5 12:10:22.068: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2378 Oct 5 12:10:22.073: INFO: deleting *v1.Role: csi-mock-volumes-2378-4017/external-snapshotter-leaderelection-csi-mock-volumes-2378 Oct 5 12:10:22.077: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2378-4017/external-snapshotter-leaderelection Oct 5 12:10:22.082: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2378-4017/csi-mock Oct 5 12:10:22.086: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2378 Oct 5 12:10:22.091: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2378 Oct 5 12:10:22.095: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2378 Oct 5 12:10:22.100: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2378 Oct 5 12:10:22.105: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2378 Oct 5 12:10:22.110: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2378 Oct 5 12:10:22.120: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2378 Oct 5 12:10:22.125: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2378-4017/csi-mockplugin Oct 5 12:10:22.130: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2378 Oct 5 12:10:22.135: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2378-4017/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-2378-4017 STEP: Waiting for namespaces [csi-mock-volumes-2378-4017] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:28.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:22.461 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257 CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":16,"skipped":497,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:28.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:10:30.313: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-15d0a2dc-b3a8-475e-a503-58df3fc1cbc3-backend && mount --bind /tmp/local-volume-test-15d0a2dc-b3a8-475e-a503-58df3fc1cbc3-backend /tmp/local-volume-test-15d0a2dc-b3a8-475e-a503-58df3fc1cbc3-backend && ln -s /tmp/local-volume-test-15d0a2dc-b3a8-475e-a503-58df3fc1cbc3-backend /tmp/local-volume-test-15d0a2dc-b3a8-475e-a503-58df3fc1cbc3] Namespace:persistent-local-volumes-test-1872 PodName:hostexec-v122-worker-5bx9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:30.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:10:30.467: INFO: Creating a PV followed by a PVC Oct 5 12:10:30.476: INFO: Waiting for PV local-pvl4m4j to bind to PVC pvc-2fvwc Oct 5 12:10:30.476: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2fvwc] to have phase Bound Oct 5 12:10:30.480: INFO: PersistentVolumeClaim pvc-2fvwc found but phase is Pending instead of Bound. Oct 5 12:10:32.484: INFO: PersistentVolumeClaim pvc-2fvwc found but phase is Pending instead of Bound. 
Oct 5 12:10:34.488: INFO: PersistentVolumeClaim pvc-2fvwc found and phase=Bound (4.01181451s) Oct 5 12:10:34.488: INFO: Waiting up to 3m0s for PersistentVolume local-pvl4m4j to have phase Bound Oct 5 12:10:34.492: INFO: PersistentVolume local-pvl4m4j found and phase=Bound (3.231292ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Oct 5 12:10:36.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1872 exec pod-f2df5049-75f0-403b-9020-991a3f2ca819 --namespace=persistent-local-volumes-test-1872 -- stat -c %g /mnt/volume1' Oct 5 12:10:36.739: INFO: stderr: "" Oct 5 12:10:36.739: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Oct 5 12:10:38.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1872 exec pod-77e2474f-8076-45f4-98f4-31d3109ba033 --namespace=persistent-local-volumes-test-1872 -- stat -c %g /mnt/volume1' Oct 5 12:10:38.990: INFO: stderr: "" Oct 5 12:10:38.990: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-f2df5049-75f0-403b-9020-991a3f2ca819 in namespace persistent-local-volumes-test-1872 STEP: Deleting second pod STEP: Deleting pod pod-77e2474f-8076-45f4-98f4-31d3109ba033 in namespace persistent-local-volumes-test-1872 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:10:39.001: INFO: Deleting PersistentVolumeClaim "pvc-2fvwc" Oct 5 12:10:39.006: INFO: Deleting PersistentVolume "local-pvl4m4j" STEP: Removing the test directory Oct 5 12:10:39.011: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-15d0a2dc-b3a8-475e-a503-58df3fc1cbc3 && umount /tmp/local-volume-test-15d0a2dc-b3a8-475e-a503-58df3fc1cbc3-backend && rm -r /tmp/local-volume-test-15d0a2dc-b3a8-475e-a503-58df3fc1cbc3-backend] Namespace:persistent-local-volumes-test-1872 PodName:hostexec-v122-worker-5bx9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:39.011: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:39.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1872" for this suite. 
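For reference, the dir-link-bindmounted setup and the fsGroup check that the spec above performs can be reproduced by hand. This is a minimal sketch, not the framework's code: the directory path, namespace, and pod name are placeholders, and the node-side commands assume a root shell on the worker node (the suite obtains one through a hostexec pod and nsenter, as the ExecWithOptions entries show).

# Node side: create the backing directory, bind-mount it onto itself, then expose it via a symlink (the local PV points at the symlink)
dir=/tmp/local-volume-demo
mkdir "${dir}-backend"
mount --bind "${dir}-backend" "${dir}-backend"
ln -s "${dir}-backend" "${dir}"

# Client side: with a pod mounting the local PV at /mnt/volume1 and securityContext.fsGroup=1234,
# the group owner of the mount should read back as 1234, matching the "1234" stdout above
kubectl exec <pod-name> -n <namespace> -- stat -c %g /mnt/volume1

# Teardown mirrors the AfterEach step in the log
rm "${dir}"
umount "${dir}-backend"
rm -r "${dir}-backend"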
• [SLOW TEST:10.893 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":17,"skipped":542,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:39.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:39 Oct 5 12:10:39.234: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:39.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-5051" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.055 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should schedule pods in the same zones as statically provisioned PVs [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:50 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:40 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:39.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 STEP: Creating configMap with name projected-configmap-test-volume-map-5a977738-19e0-4fb4-a7a7-9731edebef77 STEP: Creating a pod to test consume configMaps Oct 5 12:10:39.362: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-028fd66b-bba3-4712-8d54-3f0ac59b7870" in namespace "projected-5174" to be "Succeeded or Failed" Oct 5 12:10:39.365: INFO: Pod "pod-projected-configmaps-028fd66b-bba3-4712-8d54-3f0ac59b7870": Phase="Pending", Reason="", readiness=false. Elapsed: 3.252794ms Oct 5 12:10:41.369: INFO: Pod "pod-projected-configmaps-028fd66b-bba3-4712-8d54-3f0ac59b7870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007244471s Oct 5 12:10:43.374: INFO: Pod "pod-projected-configmaps-028fd66b-bba3-4712-8d54-3f0ac59b7870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011835622s STEP: Saw pod success Oct 5 12:10:43.374: INFO: Pod "pod-projected-configmaps-028fd66b-bba3-4712-8d54-3f0ac59b7870" satisfied condition "Succeeded or Failed" Oct 5 12:10:43.377: INFO: Trying to get logs from node v122-worker2 pod pod-projected-configmaps-028fd66b-bba3-4712-8d54-3f0ac59b7870 container agnhost-container: STEP: delete the pod Oct 5 12:10:43.393: INFO: Waiting for pod pod-projected-configmaps-028fd66b-bba3-4712-8d54-3f0ac59b7870 to disappear Oct 5 12:10:43.396: INFO: Pod pod-projected-configmaps-028fd66b-bba3-4712-8d54-3f0ac59b7870 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:43.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5174" for this suite. 
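The projected configMap spec above creates a short-lived test pod and waits up to 5m for it to reach "Succeeded or Failed" before pulling the container output. A rough kubectl-only equivalent of that wait, with hypothetical pod and namespace names, is a simple poll on the pod phase:

# Poll the pod phase until the pod terminates (150 x 2s matches the framework's 5m timeout)
for i in $(seq 1 150); do
  phase=$(kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.phase}')
  case "$phase" in Succeeded|Failed) break ;; esac
  sleep 2
done
echo "final phase: $phase"

# The spec then reads the container log, as in "Trying to get logs from node ..." above
kubectl logs <pod-name> -n <namespace> -c agnhost-container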
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":18,"skipped":602,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:43.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Oct 5 12:10:43.459: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:43.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-9113" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.052 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 4 containers and 1 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:43.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42 [It] should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 STEP: starting configmap-client STEP: Checking that text file contents are perfect. 
Oct 5 12:10:45.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=volume-1348 exec configmap-client --namespace=volume-1348 -- cat /opt/0/firstfile' Oct 5 12:10:45.802: INFO: stderr: "" Oct 5 12:10:45.802: INFO: stdout: "this is the first file" Oct 5 12:10:45.802: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-1348 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:10:45.802: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:45.911: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-1348 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:10:45.911: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:46.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=volume-1348 exec configmap-client --namespace=volume-1348 -- cat /opt/1/secondfile' Oct 5 12:10:46.245: INFO: stderr: "" Oct 5 12:10:46.245: INFO: stdout: "this is the second file" Oct 5 12:10:46.245: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/1] Namespace:volume-1348 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:10:46.245: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:46.327: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/1] Namespace:volume-1348 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:10:46.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod configmap-client in namespace volume-1348 Oct 5 12:10:46.446: INFO: Waiting for pod configmap-client to disappear Oct 5 12:10:46.449: INFO: Pod configmap-client still exists Oct 5 12:10:48.450: INFO: Waiting for pod configmap-client to disappear Oct 5 12:10:48.453: INFO: Pod configmap-client no longer exists [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:48.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-1348" for this suite. 
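The mountability check above boils down to a few execs per mapped file: read the content back and confirm the mount point behaves as a filesystem directory rather than a block device. The same probes by hand look roughly as follows (pod name and namespace taken from the log above, purely illustrative since that namespace has already been destroyed):

kubectl exec configmap-client -n volume-1348 -- cat /opt/0/firstfile      # expect: "this is the first file"
kubectl exec configmap-client -n volume-1348 -- sh -c 'test -d /opt/0'    # exit 0: the mount point is a directory
kubectl exec configmap-client -n volume-1348 -- sh -c 'test -b /opt/0'    # non-zero exit: it is not a block device
kubectl exec configmap-client -n volume-1348 -- cat /opt/1/secondfile     # expect: "this is the second file"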
• ------------------------------ {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":19,"skipped":628,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:48.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 STEP: Creating configMap with name configmap-test-volume-map-36915623-937d-4f56-bbe2-ce7e89397574 STEP: Creating a pod to test consume configMaps Oct 5 12:10:48.583: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f96bb5b-012e-4233-b03a-7fecdfcf24b3" in namespace "configmap-7592" to be "Succeeded or Failed" Oct 5 12:10:48.586: INFO: Pod "pod-configmaps-9f96bb5b-012e-4233-b03a-7fecdfcf24b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.758026ms Oct 5 12:10:50.589: INFO: Pod "pod-configmaps-9f96bb5b-012e-4233-b03a-7fecdfcf24b3": Phase="Running", Reason="", readiness=false. Elapsed: 2.006440665s Oct 5 12:10:52.594: INFO: Pod "pod-configmaps-9f96bb5b-012e-4233-b03a-7fecdfcf24b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011178295s STEP: Saw pod success Oct 5 12:10:52.594: INFO: Pod "pod-configmaps-9f96bb5b-012e-4233-b03a-7fecdfcf24b3" satisfied condition "Succeeded or Failed" Oct 5 12:10:52.597: INFO: Trying to get logs from node v122-worker2 pod pod-configmaps-9f96bb5b-012e-4233-b03a-7fecdfcf24b3 container agnhost-container: STEP: delete the pod Oct 5 12:10:52.612: INFO: Waiting for pod pod-configmaps-9f96bb5b-012e-4233-b03a-7fecdfcf24b3 to disappear Oct 5 12:10:52.616: INFO: Pod pod-configmaps-9f96bb5b-012e-4233-b03a-7fecdfcf24b3 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:52.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7592" for this suite. 
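Both of the FSGroup configMap specs above (projected and plain) come down to mounting a configMap with an item mapping into a pod that runs as a non-root user with fsGroup set, then verifying the projected file. A simplified, hypothetical analogue of such a pod is sketched below; the names, user ID, and group ID are made up, not the suite's generated values.

# A throwaway configMap and a pod that consumes it with a key-to-path mapping, as non-root, with fsGroup
kubectl create configmap demo-config --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-fsgroup-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # non-root
    fsGroup: 1234        # group applied to the volume's files
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/demo/path/to/data && cat /etc/demo/path/to/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo
  volumes:
  - name: cfg
    configMap:
      name: demo-config
      items:
      - key: data-1
        path: path/to/data   # the "mapping": the key is projected to a custom relative path
EOF

Once the pod reaches Succeeded, its log shows the file's group ownership (the fsGroup value) followed by the configMap data, which is essentially what the two specs assert.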
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":20,"skipped":668,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:19.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "v122-worker" STEP: Initializing test volumes Oct 5 12:10:21.398: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-09ad1f01-cf3d-41a8-8edd-39056f3702c6] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:21.398: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:21.550: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-31355685-f7a0-4e66-844d-db8047059f17] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:21.551: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:21.692: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5cd394b3-24ce-4b46-a678-d3beb3eb3235] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:21.692: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:21.830: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-699877e0-69f2-439e-a257-11e13617021a] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:21.830: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:21.961: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4f925510-2a1a-4453-8d54-70d0ee37cf89] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:21.961: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:22.110: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6e5abab4-d830-41c7-bdf4-6f0b5cc51956] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:22.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:10:22.270: INFO: Creating a PV followed by a PVC Oct 5 12:10:22.279: INFO: Creating a PV followed by a PVC Oct 5 12:10:22.287: INFO: Creating a PV followed by a PVC Oct 5 12:10:22.294: INFO: Creating a PV followed by a PVC Oct 5 12:10:22.300: INFO: Creating a PV followed by a PVC Oct 5 12:10:22.306: INFO: Creating a PV followed by a PVC Oct 5 12:10:32.362: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "v122-worker2" STEP: Initializing test volumes Oct 5 12:10:34.376: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4bbef5a8-0e2b-4697-8e47-fb6108796c77] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:34.376: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:34.523: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fae89b18-75a0-4216-8b1e-97c6219f4c36] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:34.523: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:34.688: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-255df240-d105-4ebb-a794-ed452f317ebe] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:34.688: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:34.809: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7ca478dc-c6d0-4b74-9fb0-b5a2009b3e8b] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:34.809: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:34.939: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-316eb3b1-1830-4b1a-8329-41aa9ffbf759] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:34.939: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:35.052: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e78d9503-1b59-4ed9-a818-1444340d0392] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:35.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:10:35.153: INFO: Creating a PV followed by a PVC Oct 5 
12:10:35.162: INFO: Creating a PV followed by a PVC Oct 5 12:10:35.168: INFO: Creating a PV followed by a PVC Oct 5 12:10:35.174: INFO: Creating a PV followed by a PVC Oct 5 12:10:35.181: INFO: Creating a PV followed by a PVC Oct 5 12:10:35.188: INFO: Creating a PV followed by a PVC Oct 5 12:10:45.249: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 STEP: Creating a StatefulSet with pod affinity on nodes Oct 5 12:10:45.260: INFO: Found 0 stateful pods, waiting for 3 Oct 5 12:10:55.266: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 12:10:55.266: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 12:10:55.266: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Oct 5 12:10:55.271: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Oct 5 12:10:55.274: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (3.099411ms) Oct 5 12:10:55.274: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Oct 5 12:10:55.277: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (2.855912ms) Oct 5 12:10:55.277: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Oct 5 12:10:55.280: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (3.170975ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Oct 5 12:10:55.280: INFO: Deleting PersistentVolumeClaim "pvc-qtgn5" Oct 5 12:10:55.285: INFO: Deleting PersistentVolume "local-pvvj995" STEP: Cleaning up PVC and PV Oct 5 12:10:55.290: INFO: Deleting PersistentVolumeClaim "pvc-wz55g" Oct 5 12:10:55.295: INFO: Deleting PersistentVolume "local-pvc55qm" STEP: Cleaning up PVC and PV Oct 5 12:10:55.299: INFO: Deleting PersistentVolumeClaim "pvc-4cwsh" Oct 5 12:10:55.303: INFO: Deleting PersistentVolume "local-pvw9hfp" STEP: Cleaning up PVC and PV Oct 5 12:10:55.308: INFO: Deleting PersistentVolumeClaim "pvc-hx9jc" Oct 5 12:10:55.312: INFO: Deleting PersistentVolume "local-pv7qj25" STEP: Cleaning up PVC and PV Oct 5 12:10:55.317: INFO: Deleting PersistentVolumeClaim "pvc-hvw48" Oct 5 12:10:55.322: INFO: Deleting PersistentVolume "local-pvbqhwb" STEP: Cleaning up PVC and PV Oct 5 12:10:55.326: INFO: Deleting PersistentVolumeClaim "pvc-lvns4" Oct 5 12:10:55.331: INFO: Deleting PersistentVolume "local-pvhtbql" STEP: Removing the test directory Oct 5 12:10:55.335: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-09ad1f01-cf3d-41a8-8edd-39056f3702c6] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:55.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:55.491: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-31355685-f7a0-4e66-844d-db8047059f17] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:55.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:55.627: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5cd394b3-24ce-4b46-a678-d3beb3eb3235] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:55.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:55.751: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-699877e0-69f2-439e-a257-11e13617021a] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:55.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:55.892: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4f925510-2a1a-4453-8d54-70d0ee37cf89] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:55.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:56.022: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6e5abab4-d830-41c7-bdf4-6f0b5cc51956] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker-pqlx9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:56.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Oct 5 12:10:56.172: INFO: Deleting PersistentVolumeClaim "pvc-px6f7" Oct 5 12:10:56.177: INFO: Deleting PersistentVolume "local-pvh2kq9" STEP: Cleaning up PVC and PV Oct 5 12:10:56.182: INFO: Deleting PersistentVolumeClaim "pvc-5fqh6" Oct 5 12:10:56.187: INFO: Deleting PersistentVolume "local-pvmh2gf" STEP: Cleaning up PVC and PV Oct 5 12:10:56.191: INFO: Deleting PersistentVolumeClaim "pvc-7d7bm" Oct 5 12:10:56.195: INFO: Deleting PersistentVolume "local-pvjvbbj" STEP: Cleaning up PVC and PV Oct 5 12:10:56.199: INFO: Deleting PersistentVolumeClaim "pvc-zv2gp" Oct 5 12:10:56.209: INFO: Deleting PersistentVolume "local-pvn4rqg" STEP: Cleaning up PVC and PV Oct 5 12:10:56.221: INFO: Deleting PersistentVolumeClaim "pvc-p9hj2" Oct 5 12:10:56.225: INFO: Deleting PersistentVolume "local-pvcssz7" STEP: Cleaning up PVC and PV Oct 5 12:10:56.230: INFO: Deleting PersistentVolumeClaim "pvc-c4qqz" Oct 5 12:10:56.233: INFO: Deleting PersistentVolume "local-pvmhljq" STEP: Removing the test directory Oct 5 12:10:56.238: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4bbef5a8-0e2b-4697-8e47-fb6108796c77] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Oct 5 12:10:56.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:56.395: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fae89b18-75a0-4216-8b1e-97c6219f4c36] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:56.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:56.553: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-255df240-d105-4ebb-a794-ed452f317ebe] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:56.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:56.695: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7ca478dc-c6d0-4b74-9fb0-b5a2009b3e8b] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:56.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:56.837: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-316eb3b1-1830-4b1a-8329-41aa9ffbf759] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:56.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:10:56.983: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e78d9503-1b59-4ed9-a818-1444340d0392] Namespace:persistent-local-volumes-test-3884 PodName:hostexec-v122-worker2-jl7w7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:56.983: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:10:57.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3884" for this suite. 
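The StatefulSet-with-pod-affinity spec above only asserts things that can also be inspected directly with kubectl: the three stateful pods become Running and Ready, each per-pod claim binds, and, because pod management is Parallel and the pods carry an affinity term, their volumes end up on a single node. A quick manual check against a live run might look like this (namespace is a placeholder; the claim names follow the vol1-local-volume-statefulset-N pattern seen in the log):

# The stateful pods and the node each one landed on; with the affinity term they should share one node
kubectl get pods -n <namespace> -o wide | grep local-volume-statefulset

# Each per-pod claim should report Bound, as the spec checks with a 1s timeout per claim
for i in 0 1 2; do
  kubectl get pvc "vol1-local-volume-statefulset-$i" -n <namespace> -o jsonpath='{.status.phase}{"\n"}'
done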
• [SLOW TEST:37.803 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:52.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:10:54.896: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a4e9b9e7-c519-40a6-9bc0-7de7021d27d4] Namespace:persistent-local-volumes-test-691 PodName:hostexec-v122-worker-n5crc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:10:54.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:10:55.043: INFO: Creating a PV followed by a PVC Oct 5 12:10:55.052: INFO: Waiting for PV local-pvpjd95 to bind to PVC pvc-qlpd5 Oct 5 12:10:55.052: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-qlpd5] to have phase Bound Oct 5 12:10:55.055: INFO: PersistentVolumeClaim pvc-qlpd5 found but phase is Pending instead of Bound. Oct 5 12:10:57.060: INFO: PersistentVolumeClaim pvc-qlpd5 found but phase is Pending instead of Bound. Oct 5 12:10:59.064: INFO: PersistentVolumeClaim pvc-qlpd5 found but phase is Pending instead of Bound. Oct 5 12:11:01.068: INFO: PersistentVolumeClaim pvc-qlpd5 found but phase is Pending instead of Bound. Oct 5 12:11:03.073: INFO: PersistentVolumeClaim pvc-qlpd5 found but phase is Pending instead of Bound. 
Oct 5 12:11:05.077: INFO: PersistentVolumeClaim pvc-qlpd5 found and phase=Bound (10.024417438s) Oct 5 12:11:05.077: INFO: Waiting up to 3m0s for PersistentVolume local-pvpjd95 to have phase Bound Oct 5 12:11:05.080: INFO: PersistentVolume local-pvpjd95 found and phase=Bound (3.523475ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Oct 5 12:11:05.087: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:11:05.088: INFO: Deleting PersistentVolumeClaim "pvc-qlpd5" Oct 5 12:11:05.094: INFO: Deleting PersistentVolume "local-pvpjd95" STEP: Removing the test directory Oct 5 12:11:05.099: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a4e9b9e7-c519-40a6-9bc0-7de7021d27d4] Namespace:persistent-local-volumes-test-691 PodName:hostexec-v122-worker-n5crc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:05.099: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:05.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-691" for this suite. 
S [SKIPPING] [12.444 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:05.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-limits-on-node STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35 Oct 5 12:11:05.315: INFO: Only supported for providers [aws gce gke] (not local) [AfterEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:05.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-limits-on-node-6088" for this suite. 
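The skipped volume-limits spec would verify that every node advertises a volume limit, which only the in-tree cloud providers populate (hence the aws/gce/gke restriction). Where such limits do exist they are visible from the API; two illustrative queries, with the node name as a placeholder:

# In-tree limits appear as attachable-volumes-* keys in node allocatable on aws/gce/gke
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable}{"\n"}{end}'

# CSI drivers publish their per-node attach limit through the CSINode object
kubectl get csinode <node-name> -o jsonpath='{range .spec.drivers[*]}{.name}{"\t"}{.allocatable.count}{"\n"}{end}'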
S [SKIPPING] in Spec Setup (BeforeEach) [0.044 seconds] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should verify that all nodes have volume limits [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:43 Only supported for providers [aws gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:36 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:55.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 STEP: Building a driver namespace object, basename csi-mock-volumes-4897 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Oct 5 12:09:56.022: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-attacher Oct 5 12:09:56.025: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4897 Oct 5 12:09:56.026: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4897 Oct 5 12:09:56.029: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4897 Oct 5 12:09:56.033: INFO: creating *v1.Role: csi-mock-volumes-4897-7140/external-attacher-cfg-csi-mock-volumes-4897 Oct 5 12:09:56.037: INFO: creating *v1.RoleBinding: csi-mock-volumes-4897-7140/csi-attacher-role-cfg Oct 5 12:09:56.040: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-provisioner Oct 5 12:09:56.043: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4897 Oct 5 12:09:56.043: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4897 Oct 5 12:09:56.047: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4897 Oct 5 12:09:56.050: INFO: creating *v1.Role: csi-mock-volumes-4897-7140/external-provisioner-cfg-csi-mock-volumes-4897 Oct 5 12:09:56.053: INFO: creating *v1.RoleBinding: csi-mock-volumes-4897-7140/csi-provisioner-role-cfg Oct 5 12:09:56.056: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-resizer Oct 5 12:09:56.059: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4897 Oct 5 12:09:56.059: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4897 Oct 5 12:09:56.062: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4897 Oct 5 12:09:56.065: INFO: creating *v1.Role: csi-mock-volumes-4897-7140/external-resizer-cfg-csi-mock-volumes-4897 Oct 5 12:09:56.068: INFO: creating *v1.RoleBinding: csi-mock-volumes-4897-7140/csi-resizer-role-cfg Oct 5 12:09:56.071: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-snapshotter Oct 5 12:09:56.074: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4897 Oct 5 12:09:56.074: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4897 Oct 5 12:09:56.077: INFO: creating *v1.ClusterRoleBinding: 
csi-snapshotter-role-csi-mock-volumes-4897 Oct 5 12:09:56.080: INFO: creating *v1.Role: csi-mock-volumes-4897-7140/external-snapshotter-leaderelection-csi-mock-volumes-4897 Oct 5 12:09:56.083: INFO: creating *v1.RoleBinding: csi-mock-volumes-4897-7140/external-snapshotter-leaderelection Oct 5 12:09:56.087: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-mock Oct 5 12:09:56.090: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4897 Oct 5 12:09:56.093: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4897 Oct 5 12:09:56.097: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4897 Oct 5 12:09:56.101: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4897 Oct 5 12:09:56.104: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4897 Oct 5 12:09:56.108: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4897 Oct 5 12:09:56.111: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4897 Oct 5 12:09:56.114: INFO: creating *v1.StatefulSet: csi-mock-volumes-4897-7140/csi-mockplugin Oct 5 12:09:56.120: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4897 Oct 5 12:09:56.123: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4897" Oct 5 12:09:56.126: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4897 to register on node v122-worker2 I1005 12:09:59.180101 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1005 12:09:59.182790 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4897","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:09:59.184992 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1005 12:09:59.187540 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1005 12:09:59.286966 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4897","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:09:59.738588 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4897"},"Error":"","FullError":null} STEP: Creating pod Oct 5 12:10:01.144: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:10:01.151: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6psr4] to have phase Bound Oct 5 12:10:01.154: INFO: PersistentVolumeClaim pvc-6psr4 found but phase is Pending instead of Bound. 
I1005 12:10:01.158914 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-42b2871c-23bf-4771-b390-ea7f5155738d","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1005 12:10:02.162302 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-42b2871c-23bf-4771-b390-ea7f5155738d","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-42b2871c-23bf-4771-b390-ea7f5155738d"}}},"Error":"","FullError":null} Oct 5 12:10:03.159: INFO: PersistentVolumeClaim pvc-6psr4 found and phase=Bound (2.007940396s) I1005 12:10:03.345607 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:10:03.348616 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:10:03.351: INFO: >>> kubeConfig: /root/.kube/config I1005 12:10:03.509391 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-42b2871c-23bf-4771-b390-ea7f5155738d/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-42b2871c-23bf-4771-b390-ea7f5155738d","storage.kubernetes.io/csiProvisionerIdentity":"1664971799188-8081-csi-mock-csi-mock-volumes-4897"}},"Response":{},"Error":"","FullError":null} I1005 12:10:03.517590 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:10:03.519919 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:10:03.522: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:03.656: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:10:03.787: INFO: >>> kubeConfig: /root/.kube/config I1005 12:10:03.923243 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-42b2871c-23bf-4771-b390-ea7f5155738d/globalmount","target_path":"/var/lib/kubelet/pods/3dec9cff-09d1-49d4-9f34-f38dc93879fe/volumes/kubernetes.io~csi/pvc-42b2871c-23bf-4771-b390-ea7f5155738d/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-42b2871c-23bf-4771-b390-ea7f5155738d","storage.kubernetes.io/csiProvisionerIdentity":"1664971799188-8081-csi-mock-csi-mock-volumes-4897"}},"Response":{},"Error":"","FullError":null} Oct 5 12:10:07.179: INFO: Deleting 
pod "pvc-volume-tester-xrhcg" in namespace "csi-mock-volumes-4897" Oct 5 12:10:07.185: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xrhcg" to be fully deleted Oct 5 12:10:07.587: INFO: >>> kubeConfig: /root/.kube/config I1005 12:10:07.726713 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3dec9cff-09d1-49d4-9f34-f38dc93879fe/volumes/kubernetes.io~csi/pvc-42b2871c-23bf-4771-b390-ea7f5155738d/mount"},"Response":{},"Error":"","FullError":null} I1005 12:10:07.791452 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:10:07.793773 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-42b2871c-23bf-4771-b390-ea7f5155738d/globalmount"},"Response":{},"Error":"","FullError":null} I1005 12:10:09.228829 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Oct 5 12:10:10.200: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6psr4", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4897", SelfLink:"", UID:"42b2871c-23bf-4771-b390-ea7f5155738d", ResourceVersion:"11854", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568601, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034fb7d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034fb7e8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc005204bc0), VolumeMode:(*v1.PersistentVolumeMode)(0xc005204bd0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:10:10.200: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6psr4", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4897", SelfLink:"", UID:"42b2871c-23bf-4771-b390-ea7f5155738d", ResourceVersion:"11855", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568601, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4897"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034fb8c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034fb8d8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034fb8f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034fb908), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc005204cb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc005204cc0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:10:10.200: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6psr4", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4897", SelfLink:"", UID:"42b2871c-23bf-4771-b390-ea7f5155738d", ResourceVersion:"11872", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568601, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4897"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00540e648), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00540e660), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00540e678), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00540e690), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-42b2871c-23bf-4771-b390-ea7f5155738d", StorageClassName:(*string)(0xc0032ef540), VolumeMode:(*v1.PersistentVolumeMode)(0xc0032ef550), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:10:10.200: INFO: PVC event 
MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6psr4", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4897", SelfLink:"", UID:"42b2871c-23bf-4771-b390-ea7f5155738d", ResourceVersion:"11873", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568601, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4897"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00520d050), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00520d068), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00520d080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00520d098), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00520d0b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00520d0c8), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-42b2871c-23bf-4771-b390-ea7f5155738d", StorageClassName:(*string)(0xc0051e0fb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0051e0fc0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:10:10.200: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6psr4", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4897", SelfLink:"", UID:"42b2871c-23bf-4771-b390-ea7f5155738d", ResourceVersion:"12032", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568601, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(0xc00520d0f8), DeletionGracePeriodSeconds:(*int64)(0xc0037936c8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4897"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00520d110), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00520d128), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", 
Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00520d140), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00520d158), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00520d170), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00520d188), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-42b2871c-23bf-4771-b390-ea7f5155738d", StorageClassName:(*string)(0xc0051e1010), VolumeMode:(*v1.PersistentVolumeMode)(0xc0051e1020), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:10:10.201: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-6psr4", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4897", SelfLink:"", UID:"42b2871c-23bf-4771-b390-ea7f5155738d", ResourceVersion:"12033", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568601, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(0xc00520d1b8), DeletionGracePeriodSeconds:(*int64)(0xc0037937e8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4897"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00520d1d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00520d1e8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00520d200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00520d218), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00520d230), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00520d248), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-42b2871c-23bf-4771-b390-ea7f5155738d", StorageClassName:(*string)(0xc0051e1060), VolumeMode:(*v1.PersistentVolumeMode)(0xc0051e1070), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", 
AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-xrhcg Oct 5 12:10:10.201: INFO: Deleting pod "pvc-volume-tester-xrhcg" in namespace "csi-mock-volumes-4897" STEP: Deleting claim pvc-6psr4 STEP: Deleting storageclass csi-mock-volumes-4897-scvq252 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4897 STEP: Waiting for namespaces [csi-mock-volumes-4897] to vanish STEP: uninstalling csi mock driver Oct 5 12:10:23.244: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-attacher Oct 5 12:10:23.249: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4897 Oct 5 12:10:23.254: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4897 Oct 5 12:10:23.259: INFO: deleting *v1.Role: csi-mock-volumes-4897-7140/external-attacher-cfg-csi-mock-volumes-4897 Oct 5 12:10:23.263: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4897-7140/csi-attacher-role-cfg Oct 5 12:10:23.268: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-provisioner Oct 5 12:10:23.273: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4897 Oct 5 12:10:23.278: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4897 Oct 5 12:10:23.283: INFO: deleting *v1.Role: csi-mock-volumes-4897-7140/external-provisioner-cfg-csi-mock-volumes-4897 Oct 5 12:10:23.288: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4897-7140/csi-provisioner-role-cfg Oct 5 12:10:23.292: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-resizer Oct 5 12:10:23.296: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4897 Oct 5 12:10:23.301: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4897 Oct 5 12:10:23.306: INFO: deleting *v1.Role: csi-mock-volumes-4897-7140/external-resizer-cfg-csi-mock-volumes-4897 Oct 5 12:10:23.311: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4897-7140/csi-resizer-role-cfg Oct 5 12:10:23.316: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-snapshotter Oct 5 12:10:23.320: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4897 Oct 5 12:10:23.324: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4897 Oct 5 12:10:23.328: INFO: deleting *v1.Role: csi-mock-volumes-4897-7140/external-snapshotter-leaderelection-csi-mock-volumes-4897 Oct 5 12:10:23.333: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4897-7140/external-snapshotter-leaderelection Oct 5 12:10:23.337: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4897-7140/csi-mock Oct 5 12:10:23.341: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4897 Oct 5 12:10:23.345: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4897 Oct 5 12:10:23.353: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4897 Oct 5 12:10:23.357: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4897 Oct 5 12:10:23.361: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4897 Oct 5 12:10:23.365: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-snapshotter-role-csi-mock-volumes-4897 Oct 5 12:10:23.369: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4897 Oct 5 12:10:23.374: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4897-7140/csi-mockplugin Oct 5 12:10:23.379: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4897 STEP: deleting the driver namespace: csi-mock-volumes-4897-7140 STEP: Waiting for namespaces [csi-mock-volumes-4897-7140] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:07.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:71.457 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023 exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":9,"skipped":278,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:07.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144 [It] should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738 Oct 5 12:11:07.473: INFO: Only supported for providers [aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:07.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-4780" for this suite. 
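The "Checking PVC events" step above replays the claim's ADDED/MODIFIED/DELETED history as the external provisioner annotates it, the controller binds it, and the test finally deletes it. A minimal client-go sketch of the same observation, assuming the kubeconfig path used by this suite and hypothetical namespace/claim names (not the ones from this run), could look like:

    // Sketch only: watch a PVC and print its lifecycle events, roughly what the
    // e2e framework's "Checking PVC events" step does.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // "demo-ns" and "demo-claim" are placeholders, not values from this log.
        w, err := cs.CoreV1().PersistentVolumeClaims("demo-ns").Watch(context.TODO(),
            metav1.ListOptions{FieldSelector: "metadata.name=demo-claim"})
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            pvc, ok := ev.Object.(*corev1.PersistentVolumeClaim)
            if !ok {
                continue
            }
            fmt.Printf("PVC event %s: phase=%s volume=%s\n", ev.Type, pvc.Status.Phase, pvc.Spec.VolumeName)
        }
    }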
S [SKIPPING] [0.051 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Invalid AWS KMS key /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:737 should report an error and create no PV [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738 Only supported for providers [aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:739 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:05.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Oct 5 12:11:05.451: INFO: The status of Pod test-hostpath-type-cl4nj is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:11:07.455: INFO: The status of Pod test-hostpath-type-cl4nj is Running (Ready = true) STEP: running on node v122-worker STEP: Create a block device for further testing Oct 5 12:11:07.459: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-2869 PodName:test-hostpath-type-cl4nj ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:11:07.459: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:09.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-2869" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev","total":-1,"completed":21,"skipped":814,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} S ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:14.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411 STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:14.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8142" for this suite. • [SLOW TEST:300.064 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":8,"skipped":159,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:09.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Oct 5 12:11:09.660: INFO: The status of Pod test-hostpath-type-4h7n4 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:11:11.664: INFO: The status of Pod test-hostpath-type-4h7n4 is Running (Ready = true) STEP: running on node v122-worker2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:15.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-5662" for this suite. 
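The HostPathType specs above rely on the kubelet rejecting a hostPath mount whose declared Type does not match what is actually at the path on the node (a block device mounted as HostPathCharDev, a directory mounted as HostPathBlockDev), which is what the "Checking for HostPathType error event" step waits for. A minimal sketch of how such a volume is declared with client-go types, using placeholder names rather than the pods from this run:

    // Sketch only: a hostPath volume whose Type must match the object on the node;
    // the kubelet reports a HostPathType mount error event when it does not.
    package volumes

    import corev1 "k8s.io/api/core/v1"

    func hostPathTypedVolume() corev1.Volume {
        t := corev1.HostPathBlockDev // other options: HostPathCharDev, HostPathDirectory, HostPathSocket, ...
        return corev1.Volume{
            Name: "demo-hostpath", // placeholder name
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{
                    Path: "/mnt/test/ablkdev",
                    Type: &t,
                },
            },
        }
    }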
• [SLOW TEST:6.108 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev","total":-1,"completed":22,"skipped":815,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:07.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-06af63ec-136a-40f6-ae8d-160f46278290" Oct 5 12:11:09.562: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-06af63ec-136a-40f6-ae8d-160f46278290" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-06af63ec-136a-40f6-ae8d-160f46278290" "/tmp/local-volume-test-06af63ec-136a-40f6-ae8d-160f46278290"] Namespace:persistent-local-volumes-test-6926 PodName:hostexec-v122-worker-d5lmk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:09.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:11:09.681: INFO: Creating a PV followed by a PVC Oct 5 12:11:09.690: INFO: Waiting for PV local-pvgq7rt to bind to PVC pvc-66z9x Oct 5 12:11:09.690: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-66z9x] to have phase Bound Oct 5 12:11:09.693: INFO: PersistentVolumeClaim pvc-66z9x found but phase is Pending instead of Bound. Oct 5 12:11:11.697: INFO: PersistentVolumeClaim pvc-66z9x found but phase is Pending instead of Bound. Oct 5 12:11:13.701: INFO: PersistentVolumeClaim pvc-66z9x found but phase is Pending instead of Bound. Oct 5 12:11:15.706: INFO: PersistentVolumeClaim pvc-66z9x found but phase is Pending instead of Bound. Oct 5 12:11:17.710: INFO: PersistentVolumeClaim pvc-66z9x found but phase is Pending instead of Bound. 
Oct 5 12:11:19.714: INFO: PersistentVolumeClaim pvc-66z9x found and phase=Bound (10.024043805s) Oct 5 12:11:19.714: INFO: Waiting up to 3m0s for PersistentVolume local-pvgq7rt to have phase Bound Oct 5 12:11:19.717: INFO: PersistentVolume local-pvgq7rt found and phase=Bound (3.050815ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:11:21.743: INFO: pod "pod-d4029841-f36e-4a6c-b888-fc4d576fc956" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:11:21.743: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6926 PodName:pod-d4029841-f36e-4a6c-b888-fc4d576fc956 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:11:21.743: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:11:21.882: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Oct 5 12:11:21.882: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6926 PodName:pod-d4029841-f36e-4a6c-b888-fc4d576fc956 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:11:21.882: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:11:21.998: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-d4029841-f36e-4a6c-b888-fc4d576fc956 in namespace persistent-local-volumes-test-6926 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:11:22.004: INFO: Deleting PersistentVolumeClaim "pvc-66z9x" Oct 5 12:11:22.009: INFO: Deleting PersistentVolume "local-pvgq7rt" STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-06af63ec-136a-40f6-ae8d-160f46278290" Oct 5 12:11:22.013: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-06af63ec-136a-40f6-ae8d-160f46278290"] Namespace:persistent-local-volumes-test-6926 PodName:hostexec-v122-worker-d5lmk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:22.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:11:22.150: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-06af63ec-136a-40f6-ae8d-160f46278290] Namespace:persistent-local-volumes-test-6926 PodName:hostexec-v122-worker-d5lmk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:22.150: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:22.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6926" for this suite. • [SLOW TEST:14.811 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":10,"skipped":303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:15.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:11:17.832: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-c977540f-59fd-441f-8ce7-0c14b658683c && mount --bind /tmp/local-volume-test-c977540f-59fd-441f-8ce7-0c14b658683c /tmp/local-volume-test-c977540f-59fd-441f-8ce7-0c14b658683c] Namespace:persistent-local-volumes-test-65 PodName:hostexec-v122-worker-5qnjh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:17.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:11:17.989: INFO: Creating a PV followed by a PVC Oct 5 12:11:17.998: INFO: Waiting for PV local-pvbwb62 to bind to PVC pvc-7v27n Oct 5 12:11:17.998: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7v27n] to have phase Bound Oct 5 12:11:18.001: INFO: PersistentVolumeClaim pvc-7v27n found but phase is Pending instead of Bound. 
Oct 5 12:11:20.006: INFO: PersistentVolumeClaim pvc-7v27n found and phase=Bound (2.008309983s) Oct 5 12:11:20.006: INFO: Waiting up to 3m0s for PersistentVolume local-pvbwb62 to have phase Bound Oct 5 12:11:20.009: INFO: PersistentVolume local-pvbwb62 found and phase=Bound (3.118992ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Oct 5 12:11:24.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-65 exec pod-5ab384a2-f36a-41c3-890d-0fab7dd924be --namespace=persistent-local-volumes-test-65 -- stat -c %g /mnt/volume1' Oct 5 12:11:24.272: INFO: stderr: "" Oct 5 12:11:24.272: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-5ab384a2-f36a-41c3-890d-0fab7dd924be in namespace persistent-local-volumes-test-65 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:11:24.277: INFO: Deleting PersistentVolumeClaim "pvc-7v27n" Oct 5 12:11:24.281: INFO: Deleting PersistentVolume "local-pvbwb62" STEP: Removing the test directory Oct 5 12:11:24.285: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-c977540f-59fd-441f-8ce7-0c14b658683c && rm -r /tmp/local-volume-test-c977540f-59fd-441f-8ce7-0c14b658683c] Namespace:persistent-local-volumes-test-65 PodName:hostexec-v122-worker-5qnjh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:24.285: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:24.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-65" for this suite. 
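The "Checking fsGroup is set" step above verifies the group ID by running stat -c %g on the mount point and expecting the pod's fsGroup, 1234 in this run. A sketch of the pod-spec fields involved, with placeholder claim and image names:

    // Sketch only: a pod-level fsGroup makes the kubelet apply that GID to the volume,
    // so "stat -c %g /mnt/volume1" inside the container reports it.
    package volumes

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func fsGroupPod() *corev1.Pod {
        fsGroup := int64(1234)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
                Containers: []corev1.Container{{
                    Name:         "write-pod",
                    Image:        "busybox:1.29", // placeholder image, not the one used by the suite
                    Command:      []string{"sleep", "3600"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "volume1", MountPath: "/mnt/volume1"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "volume1",
                    VolumeSource: corev1.VolumeSource{
                        PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                            ClaimName: "demo-claim", // placeholder, not the PVC created above
                        },
                    },
                }},
            },
        }
    }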
• [SLOW TEST:8.656 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":23,"skipped":837,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:14.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:11:16.739: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-3356b41a-132a-4c4a-8f69-55dc8ee6e636-backend && mount --bind /tmp/local-volume-test-3356b41a-132a-4c4a-8f69-55dc8ee6e636-backend /tmp/local-volume-test-3356b41a-132a-4c4a-8f69-55dc8ee6e636-backend && ln -s /tmp/local-volume-test-3356b41a-132a-4c4a-8f69-55dc8ee6e636-backend /tmp/local-volume-test-3356b41a-132a-4c4a-8f69-55dc8ee6e636] Namespace:persistent-local-volumes-test-72 PodName:hostexec-v122-worker-852kc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:16.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:11:16.891: INFO: Creating a PV followed by a PVC Oct 5 12:11:16.901: INFO: Waiting for PV local-pvk7552 to bind to PVC pvc-5jvkx Oct 5 12:11:16.901: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5jvkx] to have phase Bound Oct 5 12:11:16.904: INFO: PersistentVolumeClaim pvc-5jvkx found but phase is Pending instead of Bound. Oct 5 12:11:18.909: INFO: PersistentVolumeClaim pvc-5jvkx found but phase is Pending instead of Bound. 
Oct 5 12:11:20.914: INFO: PersistentVolumeClaim pvc-5jvkx found and phase=Bound (4.012907148s) Oct 5 12:11:20.914: INFO: Waiting up to 3m0s for PersistentVolume local-pvk7552 to have phase Bound Oct 5 12:11:20.917: INFO: PersistentVolume local-pvk7552 found and phase=Bound (3.080064ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Oct 5 12:11:24.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-72 exec pod-425c0706-bb0d-4c88-b9d2-83cec22b96fd --namespace=persistent-local-volumes-test-72 -- stat -c %g /mnt/volume1' Oct 5 12:11:25.127: INFO: stderr: "" Oct 5 12:11:25.127: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-425c0706-bb0d-4c88-b9d2-83cec22b96fd in namespace persistent-local-volumes-test-72 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:11:25.132: INFO: Deleting PersistentVolumeClaim "pvc-5jvkx" Oct 5 12:11:25.136: INFO: Deleting PersistentVolume "local-pvk7552" STEP: Removing the test directory Oct 5 12:11:25.141: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-3356b41a-132a-4c4a-8f69-55dc8ee6e636 && umount /tmp/local-volume-test-3356b41a-132a-4c4a-8f69-55dc8ee6e636-backend && rm -r /tmp/local-volume-test-3356b41a-132a-4c4a-8f69-55dc8ee6e636-backend] Namespace:persistent-local-volumes-test-72 PodName:hostexec-v122-worker-852kc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:25.141: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:25.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-72" for this suite. 
• [SLOW TEST:10.610 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":9,"skipped":171,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:22.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 STEP: Creating projection with secret that has name projected-secret-test-9ca263cc-b343-41f3-97e6-77df03836594 STEP: Creating a pod to test consume secrets Oct 5 12:11:22.463: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d82b512-4130-4a45-86a7-a41752222868" in namespace "projected-772" to be "Succeeded or Failed" Oct 5 12:11:22.466: INFO: Pod "pod-projected-secrets-2d82b512-4130-4a45-86a7-a41752222868": Phase="Pending", Reason="", readiness=false. Elapsed: 3.133229ms Oct 5 12:11:24.471: INFO: Pod "pod-projected-secrets-2d82b512-4130-4a45-86a7-a41752222868": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007798336s Oct 5 12:11:26.475: INFO: Pod "pod-projected-secrets-2d82b512-4130-4a45-86a7-a41752222868": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01202786s STEP: Saw pod success Oct 5 12:11:26.475: INFO: Pod "pod-projected-secrets-2d82b512-4130-4a45-86a7-a41752222868" satisfied condition "Succeeded or Failed" Oct 5 12:11:26.477: INFO: Trying to get logs from node v122-worker2 pod pod-projected-secrets-2d82b512-4130-4a45-86a7-a41752222868 container projected-secret-volume-test: STEP: delete the pod Oct 5 12:11:26.488: INFO: Waiting for pod pod-projected-secrets-2d82b512-4130-4a45-86a7-a41752222868 to disappear Oct 5 12:11:26.490: INFO: Pod pod-projected-secrets-2d82b512-4130-4a45-86a7-a41752222868 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:26.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-772" for this suite. STEP: Destroying namespace "secret-namespace-2947" for this suite. 
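The projected-secret spec above mounts a secret through a projected volume and checks that an identically named secret in a second namespace does not interfere; secret references in volumes resolve only within the pod's own namespace. A sketch of the volume source being exercised, with placeholder names:

    // Sketch only: a projected volume sourcing a single secret; the lookup is
    // namespace-local, so a same-named secret elsewhere is never consulted.
    package volumes

    import corev1 "k8s.io/api/core/v1"

    func projectedSecretVolume(secretName string) corev1.Volume {
        mode := int32(0400)
        return corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        },
                    }},
                },
            },
        }
    }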
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":11,"skipped":338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:26.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 STEP: Creating configMap with name configmap-test-volume-b576ab73-3cc2-42f7-a93a-3459cc4bc5ed STEP: Creating a pod to test consume configMaps Oct 5 12:11:26.613: INFO: Waiting up to 5m0s for pod "pod-configmaps-874e514b-6357-4c0b-9079-718886a48e49" in namespace "configmap-8726" to be "Succeeded or Failed" Oct 5 12:11:26.616: INFO: Pod "pod-configmaps-874e514b-6357-4c0b-9079-718886a48e49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.695889ms Oct 5 12:11:28.621: INFO: Pod "pod-configmaps-874e514b-6357-4c0b-9079-718886a48e49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007475314s Oct 5 12:11:30.626: INFO: Pod "pod-configmaps-874e514b-6357-4c0b-9079-718886a48e49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013084731s STEP: Saw pod success Oct 5 12:11:30.627: INFO: Pod "pod-configmaps-874e514b-6357-4c0b-9079-718886a48e49" satisfied condition "Succeeded or Failed" Oct 5 12:11:30.629: INFO: Trying to get logs from node v122-worker2 pod pod-configmaps-874e514b-6357-4c0b-9079-718886a48e49 container agnhost-container: STEP: delete the pod Oct 5 12:11:30.654: INFO: Waiting for pod pod-configmaps-874e514b-6357-4c0b-9079-718886a48e49 to disappear Oct 5 12:11:30.657: INFO: Pod pod-configmaps-874e514b-6357-4c0b-9079-718886a48e49 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:30.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8726" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":389,"failed":0} SSSSSSSSSSSSS ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":48,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:31.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557 STEP: Creating configMap with name cm-test-opt-create-b8707cfe-9a3b-4232-9fc7-8e4d76835140 STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:31.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8049" for this suite. • [SLOW TEST:300.068 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":6,"skipped":48,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:31.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename flexvolume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:169 Oct 5 12:11:31.901: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:31.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "flexvolume-676" for this suite. 
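The ConfigMap spec above creates the referenced ConfigMap but points the volume at a key that is not present; because the reference is non-optional the volume cannot be populated and the pod never starts, so the spec consumes the full five-minute pod-start window before passing (300.068 seconds in this run). A sketch of the volume shape involved, with placeholder names:

    // Sketch only: a non-optional configMap volume item whose key is missing from
    // the ConfigMap; the kubelet cannot populate the volume and the pod stays unstarted.
    package volumes

    import corev1 "k8s.io/api/core/v1"

    func missingKeyConfigMapVolume() corev1.Volume {
        optional := false
        return corev1.Volume{
            Name: "createcm-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"}, // placeholder name
                    Items:    []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},         // key absent from the ConfigMap
                    Optional: &optional,
                },
            },
        }
    }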
S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be mountable when non-attachable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:173 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:31.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Oct 5 12:11:32.048: INFO: The status of Pod test-hostpath-type-d99gj is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:11:34.052: INFO: The status of Pod test-hostpath-type-d99gj is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:11:36.053: INFO: The status of Pod test-hostpath-type-d99gj is Running (Ready = true) STEP: running on node v122-worker [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:38.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-308" for this suite. 
• [SLOW TEST:6.093 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev","total":-1,"completed":7,"skipped":124,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:38.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 STEP: Creating a pod to test downward API volume plugin Oct 5 12:11:38.199: INFO: Waiting up to 5m0s for pod "metadata-volume-970b0f82-f6a5-42b1-b74a-560a9aa33ea5" in namespace "projected-6482" to be "Succeeded or Failed" Oct 5 12:11:38.202: INFO: Pod "metadata-volume-970b0f82-f6a5-42b1-b74a-560a9aa33ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.277588ms Oct 5 12:11:40.206: INFO: Pod "metadata-volume-970b0f82-f6a5-42b1-b74a-560a9aa33ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007311926s Oct 5 12:11:42.211: INFO: Pod "metadata-volume-970b0f82-f6a5-42b1-b74a-560a9aa33ea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012255612s STEP: Saw pod success Oct 5 12:11:42.211: INFO: Pod "metadata-volume-970b0f82-f6a5-42b1-b74a-560a9aa33ea5" satisfied condition "Succeeded or Failed" Oct 5 12:11:42.215: INFO: Trying to get logs from node v122-worker pod metadata-volume-970b0f82-f6a5-42b1-b74a-560a9aa33ea5 container client-container: STEP: delete the pod Oct 5 12:11:42.243: INFO: Waiting for pod metadata-volume-970b0f82-f6a5-42b1-b74a-560a9aa33ea5 to disappear Oct 5 12:11:42.246: INFO: Pod metadata-volume-970b0f82-f6a5-42b1-b74a-560a9aa33ea5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:42.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6482" for this suite. 
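The projected downwardAPI spec above exposes the pod's own name as a file and reads it back as a non-root user, relying on the defaultMode together with the pod's fsGroup to keep the file readable. A sketch of that projection, with a placeholder volume name:

    // Sketch only: a projected downwardAPI source writing metadata.name to a file;
    // defaultMode plus the pod's fsGroup keeps it readable for a non-root user.
    package volumes

    import corev1 "k8s.io/api/core/v1"

    func downwardAPIVolume() corev1.Volume {
        mode := int32(0440)
        return corev1.Volume{
            Name: "podinfo", // placeholder name
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    }},
                },
            },
        }
    }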
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":154,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:28.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 STEP: Building a driver namespace object, basename csi-mock-volumes-9240 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Oct 5 12:08:28.437: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-attacher Oct 5 12:08:28.442: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9240 Oct 5 12:08:28.442: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9240 Oct 5 12:08:28.446: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9240 Oct 5 12:08:28.452: INFO: creating *v1.Role: csi-mock-volumes-9240-4046/external-attacher-cfg-csi-mock-volumes-9240 Oct 5 12:08:28.458: INFO: creating *v1.RoleBinding: csi-mock-volumes-9240-4046/csi-attacher-role-cfg Oct 5 12:08:28.462: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-provisioner Oct 5 12:08:28.466: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9240 Oct 5 12:08:28.466: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9240 Oct 5 12:08:28.470: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9240 Oct 5 12:08:28.474: INFO: creating *v1.Role: csi-mock-volumes-9240-4046/external-provisioner-cfg-csi-mock-volumes-9240 Oct 5 12:08:28.478: INFO: creating *v1.RoleBinding: csi-mock-volumes-9240-4046/csi-provisioner-role-cfg Oct 5 12:08:28.482: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-resizer Oct 5 12:08:28.486: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9240 Oct 5 12:08:28.486: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9240 Oct 5 12:08:28.490: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9240 Oct 5 12:08:28.494: INFO: creating *v1.Role: csi-mock-volumes-9240-4046/external-resizer-cfg-csi-mock-volumes-9240 Oct 5 12:08:28.498: INFO: creating *v1.RoleBinding: csi-mock-volumes-9240-4046/csi-resizer-role-cfg Oct 5 12:08:28.503: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-snapshotter Oct 5 12:08:28.507: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9240 Oct 5 12:08:28.507: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9240 Oct 5 12:08:28.511: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9240 Oct 5 12:08:28.515: INFO: creating *v1.Role: 
csi-mock-volumes-9240-4046/external-snapshotter-leaderelection-csi-mock-volumes-9240 Oct 5 12:08:28.520: INFO: creating *v1.RoleBinding: csi-mock-volumes-9240-4046/external-snapshotter-leaderelection Oct 5 12:08:28.524: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-mock Oct 5 12:08:28.528: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9240 Oct 5 12:08:28.531: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9240 Oct 5 12:08:28.535: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9240 Oct 5 12:08:28.539: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9240 Oct 5 12:08:28.542: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9240 Oct 5 12:08:28.546: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9240 Oct 5 12:08:28.550: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9240 Oct 5 12:08:28.555: INFO: creating *v1.StatefulSet: csi-mock-volumes-9240-4046/csi-mockplugin Oct 5 12:08:28.563: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9240 Oct 5 12:08:28.568: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9240" Oct 5 12:08:28.571: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9240 to register on node v122-worker I1005 12:08:33.592999 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1005 12:08:33.595390 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9240","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:08:33.597201 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1005 12:08:33.599404 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1005 12:08:33.698695 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9240","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:08:33.935248 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9240"},"Error":"","FullError":null} STEP: Creating pod Oct 5 12:08:38.090: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:08:38.096: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fdhs8] to have phase Bound Oct 5 12:08:38.099: INFO: PersistentVolumeClaim pvc-fdhs8 found but phase is Pending instead of Bound. 
I1005 12:08:38.104019 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad"}}},"Error":"","FullError":null} Oct 5 12:08:40.103: INFO: PersistentVolumeClaim pvc-fdhs8 found and phase=Bound (2.006732602s) Oct 5 12:08:40.115: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fdhs8] to have phase Bound Oct 5 12:08:40.118: INFO: PersistentVolumeClaim pvc-fdhs8 found and phase=Bound (3.375437ms) STEP: Waiting for expected CSI calls I1005 12:08:41.482539 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:41.485121 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:41.487767 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-987cde71-f8e6-477d-bfb8-6aad77147aad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad","storage.kubernetes.io/csiProvisionerIdentity":"1664971713600-8081-csi-mock-csi-mock-volumes-9240"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1005 12:08:42.089977 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:42.096248 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:42.098864 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-987cde71-f8e6-477d-bfb8-6aad77147aad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad","storage.kubernetes.io/csiProvisionerIdentity":"1664971713600-8081-csi-mock-csi-mock-volumes-9240"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} STEP: Deleting the previously created pod Oct 5 12:08:42.119: INFO: Deleting pod "pvc-volume-tester-fhthn" in namespace "csi-mock-volumes-9240" Oct 5 12:08:42.124: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fhthn" to be fully deleted I1005 12:08:43.199673 21 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:43.201448 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:43.203149 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-987cde71-f8e6-477d-bfb8-6aad77147aad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad","storage.kubernetes.io/csiProvisionerIdentity":"1664971713600-8081-csi-mock-csi-mock-volumes-9240"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1005 12:08:45.220744 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:45.222828 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:45.224919 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-987cde71-f8e6-477d-bfb8-6aad77147aad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad","storage.kubernetes.io/csiProvisionerIdentity":"1664971713600-8081-csi-mock-csi-mock-volumes-9240"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1005 12:08:49.256864 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:49.259728 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:49.262424 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-987cde71-f8e6-477d-bfb8-6aad77147aad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad","storage.kubernetes.io/csiProvisionerIdentity":"1664971713600-8081-csi-mock-csi-mock-volumes-9240"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1005 12:08:57.337800 21 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:57.340644 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:08:57.343364 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-987cde71-f8e6-477d-bfb8-6aad77147aad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad","storage.kubernetes.io/csiProvisionerIdentity":"1664971713600-8081-csi-mock-csi-mock-volumes-9240"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1005 12:09:13.381052 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:09:13.384324 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:09:13.386960 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-987cde71-f8e6-477d-bfb8-6aad77147aad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad","storage.kubernetes.io/csiProvisionerIdentity":"1664971713600-8081-csi-mock-csi-mock-volumes-9240"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1005 12:09:45.469235 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:09:45.471746 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:09:45.477061 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-987cde71-f8e6-477d-bfb8-6aad77147aad/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-987cde71-f8e6-477d-bfb8-6aad77147aad","storage.kubernetes.io/csiProvisionerIdentity":"1664971713600-8081-csi-mock-csi-mock-volumes-9240"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1005 12:10:44.564717 21 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:10:44.567509 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-987cde71-f8e6-477d-bfb8-6aad77147aad/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-fhthn Oct 5 12:10:47.131: INFO: Deleting pod "pvc-volume-tester-fhthn" in namespace "csi-mock-volumes-9240" STEP: Deleting claim pvc-fdhs8 Oct 5 12:10:47.143: INFO: Waiting up to 2m0s for PersistentVolume pvc-987cde71-f8e6-477d-bfb8-6aad77147aad to get deleted Oct 5 12:10:47.146: INFO: PersistentVolume pvc-987cde71-f8e6-477d-bfb8-6aad77147aad found and phase=Bound (3.111679ms) I1005 12:10:47.169302 21 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Oct 5 12:10:49.150: INFO: PersistentVolume pvc-987cde71-f8e6-477d-bfb8-6aad77147aad was removed STEP: Deleting storageclass csi-mock-volumes-9240-scbnh5g STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9240 STEP: Waiting for namespaces [csi-mock-volumes-9240] to vanish STEP: uninstalling csi mock driver Oct 5 12:11:01.193: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-attacher Oct 5 12:11:01.200: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9240 Oct 5 12:11:01.205: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9240 Oct 5 12:11:01.210: INFO: deleting *v1.Role: csi-mock-volumes-9240-4046/external-attacher-cfg-csi-mock-volumes-9240 Oct 5 12:11:01.216: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9240-4046/csi-attacher-role-cfg Oct 5 12:11:01.221: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-provisioner Oct 5 12:11:01.225: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9240 Oct 5 12:11:01.231: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9240 Oct 5 12:11:01.235: INFO: deleting *v1.Role: csi-mock-volumes-9240-4046/external-provisioner-cfg-csi-mock-volumes-9240 Oct 5 12:11:01.240: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9240-4046/csi-provisioner-role-cfg Oct 5 12:11:01.245: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-resizer Oct 5 12:11:01.249: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9240 Oct 5 12:11:01.254: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9240 Oct 5 12:11:01.259: INFO: deleting *v1.Role: csi-mock-volumes-9240-4046/external-resizer-cfg-csi-mock-volumes-9240 Oct 5 12:11:01.263: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9240-4046/csi-resizer-role-cfg Oct 5 12:11:01.268: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-snapshotter Oct 5 12:11:01.273: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9240 Oct 5 12:11:01.277: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9240 Oct 5 12:11:01.282: INFO: deleting *v1.Role: csi-mock-volumes-9240-4046/external-snapshotter-leaderelection-csi-mock-volumes-9240 Oct 5 12:11:01.287: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-9240-4046/external-snapshotter-leaderelection Oct 5 12:11:01.292: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9240-4046/csi-mock Oct 5 12:11:01.297: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9240 Oct 5 12:11:01.302: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9240 Oct 5 12:11:01.307: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9240 Oct 5 12:11:01.312: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9240 Oct 5 12:11:01.316: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9240 Oct 5 12:11:01.328: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9240 Oct 5 12:11:01.332: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9240 Oct 5 12:11:01.337: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9240-4046/csi-mockplugin Oct 5 12:11:01.343: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9240 STEP: deleting the driver namespace: csi-mock-volumes-9240-4046 STEP: Waiting for namespaces [csi-mock-volumes-9240-4046] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:45.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:197.031 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:735 should call NodeUnstage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error","total":-1,"completed":7,"skipped":299,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:45.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:11:47.451: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-129187c5-8efe-4fd5-94ec-30a05eae0eed-backend && ln -s /tmp/local-volume-test-129187c5-8efe-4fd5-94ec-30a05eae0eed-backend /tmp/local-volume-test-129187c5-8efe-4fd5-94ec-30a05eae0eed] Namespace:persistent-local-volumes-test-6771 PodName:hostexec-v122-worker2-vnzq6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:47.451: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Creating local PVCs and PVs Oct 5 12:11:47.600: INFO: Creating a PV followed by a PVC Oct 5 12:11:47.610: INFO: Waiting for PV local-pvbx9cv to bind to PVC pvc-fzr8h Oct 5 12:11:47.610: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fzr8h] to have phase Bound Oct 5 12:11:47.613: INFO: PersistentVolumeClaim pvc-fzr8h found but phase is Pending instead of Bound. Oct 5 12:11:49.616: INFO: PersistentVolumeClaim pvc-fzr8h found and phase=Bound (2.006465448s) Oct 5 12:11:49.617: INFO: Waiting up to 3m0s for PersistentVolume local-pvbx9cv to have phase Bound Oct 5 12:11:49.619: INFO: PersistentVolume local-pvbx9cv found and phase=Bound (2.884675ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Oct 5 12:11:49.625: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:11:49.627: INFO: Deleting PersistentVolumeClaim "pvc-fzr8h" Oct 5 12:11:49.631: INFO: Deleting PersistentVolume "local-pvbx9cv" STEP: Removing the test directory Oct 5 12:11:49.636: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-129187c5-8efe-4fd5-94ec-30a05eae0eed && rm -r /tmp/local-volume-test-129187c5-8efe-4fd5-94ec-30a05eae0eed-backend] Namespace:persistent-local-volumes-test-6771 PodName:hostexec-v122-worker2-vnzq6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:49.636: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:11:49.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6771" for this suite. 
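The [Volume type: dir-link] fixture above prepares the node path by hand: a real backing directory plus a symlink, with the local PV pointing at the symlink. The PVC binds only because the PV is pre-provisioned; the "no volume plugin matched name: kubernetes.io/no-provisioner" style of event seen with these tests is expected, since no dynamic provisioner backs the class. Roughly, with illustrative names and the worker node taken from the log:

# node-side setup (the suite runs this via a hostexec pod and nsenter)
mkdir /tmp/local-volume-demo-backend
ln -s /tmp/local-volume-demo-backend /tmp/local-volume-demo

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo                    # illustrative name
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage        # the PVC must request the same class
  local:
    path: /tmp/local-volume-demo         # the symlink created above
  nodeAffinity:                          # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["v122-worker2"]
EOF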
S [SKIPPING] [4.416 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:24.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when csiServiceAccountTokenEnabled=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525 STEP: Building a driver namespace object, basename csi-mock-volumes-4435 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:11:24.610: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-attacher Oct 5 12:11:24.614: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4435 Oct 5 12:11:24.614: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4435 Oct 5 12:11:24.617: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4435 Oct 5 12:11:24.621: INFO: creating *v1.Role: csi-mock-volumes-4435-3424/external-attacher-cfg-csi-mock-volumes-4435 Oct 5 12:11:24.625: INFO: creating *v1.RoleBinding: csi-mock-volumes-4435-3424/csi-attacher-role-cfg Oct 5 12:11:24.629: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-provisioner Oct 5 12:11:24.633: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4435 Oct 5 12:11:24.633: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4435 Oct 5 12:11:24.637: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4435 Oct 5 12:11:24.640: INFO: creating *v1.Role: csi-mock-volumes-4435-3424/external-provisioner-cfg-csi-mock-volumes-4435 Oct 5 12:11:24.644: INFO: creating *v1.RoleBinding: csi-mock-volumes-4435-3424/csi-provisioner-role-cfg Oct 5 12:11:24.648: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-resizer Oct 5 12:11:24.652: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4435 Oct 5 12:11:24.652: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4435 Oct 5 12:11:24.655: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4435 Oct 5 12:11:24.659: INFO: creating *v1.Role: csi-mock-volumes-4435-3424/external-resizer-cfg-csi-mock-volumes-4435 Oct 5 12:11:24.663: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-4435-3424/csi-resizer-role-cfg Oct 5 12:11:24.668: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-snapshotter Oct 5 12:11:24.672: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4435 Oct 5 12:11:24.672: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4435 Oct 5 12:11:24.675: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4435 Oct 5 12:11:24.679: INFO: creating *v1.Role: csi-mock-volumes-4435-3424/external-snapshotter-leaderelection-csi-mock-volumes-4435 Oct 5 12:11:24.682: INFO: creating *v1.RoleBinding: csi-mock-volumes-4435-3424/external-snapshotter-leaderelection Oct 5 12:11:24.686: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-mock Oct 5 12:11:24.690: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4435 Oct 5 12:11:24.693: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4435 Oct 5 12:11:24.697: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4435 Oct 5 12:11:24.700: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4435 Oct 5 12:11:24.704: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4435 Oct 5 12:11:24.707: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4435 Oct 5 12:11:24.711: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4435 Oct 5 12:11:24.714: INFO: creating *v1.StatefulSet: csi-mock-volumes-4435-3424/csi-mockplugin Oct 5 12:11:24.721: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4435 Oct 5 12:11:24.724: INFO: creating *v1.StatefulSet: csi-mock-volumes-4435-3424/csi-mockplugin-attacher Oct 5 12:11:24.729: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4435" Oct 5 12:11:24.732: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4435 to register on node v122-worker STEP: Creating pod Oct 5 12:11:29.746: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:11:29.753: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-7ksnr] to have phase Bound Oct 5 12:11:29.755: INFO: PersistentVolumeClaim pvc-7ksnr found but phase is Pending instead of Bound. 
Oct 5 12:11:31.761: INFO: PersistentVolumeClaim pvc-7ksnr found and phase=Bound (2.008184756s) STEP: Deleting the previously created pod Oct 5 12:11:41.779: INFO: Deleting pod "pvc-volume-tester-89gpz" in namespace "csi-mock-volumes-4435" Oct 5 12:11:41.785: INFO: Wait up to 5m0s for pod "pvc-volume-tester-89gpz" to be fully deleted STEP: Checking CSI driver logs Oct 5 12:11:43.800: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/d1b99a52-10de-4fdf-b368-151db8fae447/volumes/kubernetes.io~csi/pvc-4dece084-065b-4747-a899-32ba6886d1ea/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-89gpz Oct 5 12:11:43.800: INFO: Deleting pod "pvc-volume-tester-89gpz" in namespace "csi-mock-volumes-4435" STEP: Deleting claim pvc-7ksnr Oct 5 12:11:43.811: INFO: Waiting up to 2m0s for PersistentVolume pvc-4dece084-065b-4747-a899-32ba6886d1ea to get deleted Oct 5 12:11:43.814: INFO: PersistentVolume pvc-4dece084-065b-4747-a899-32ba6886d1ea found and phase=Bound (3.126389ms) Oct 5 12:11:45.819: INFO: PersistentVolume pvc-4dece084-065b-4747-a899-32ba6886d1ea found and phase=Released (2.007646849s) Oct 5 12:11:47.824: INFO: PersistentVolume pvc-4dece084-065b-4747-a899-32ba6886d1ea found and phase=Released (4.012512336s) Oct 5 12:11:49.828: INFO: PersistentVolume pvc-4dece084-065b-4747-a899-32ba6886d1ea found and phase=Released (6.016623312s) Oct 5 12:11:51.832: INFO: PersistentVolume pvc-4dece084-065b-4747-a899-32ba6886d1ea was removed STEP: Deleting storageclass csi-mock-volumes-4435-scfcm77 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4435 STEP: Waiting for namespaces [csi-mock-volumes-4435] to vanish STEP: uninstalling csi mock driver Oct 5 12:11:57.848: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-attacher Oct 5 12:11:57.853: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4435 Oct 5 12:11:57.857: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4435 Oct 5 12:11:57.862: INFO: deleting *v1.Role: csi-mock-volumes-4435-3424/external-attacher-cfg-csi-mock-volumes-4435 Oct 5 12:11:57.866: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4435-3424/csi-attacher-role-cfg Oct 5 12:11:57.871: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-provisioner Oct 5 12:11:57.875: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4435 Oct 5 12:11:57.880: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4435 Oct 5 12:11:57.884: INFO: deleting *v1.Role: csi-mock-volumes-4435-3424/external-provisioner-cfg-csi-mock-volumes-4435 Oct 5 12:11:57.888: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4435-3424/csi-provisioner-role-cfg Oct 5 12:11:57.893: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-resizer Oct 5 12:11:57.897: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4435 Oct 5 12:11:57.902: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4435 Oct 5 12:11:57.906: INFO: deleting *v1.Role: csi-mock-volumes-4435-3424/external-resizer-cfg-csi-mock-volumes-4435 Oct 5 12:11:57.911: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4435-3424/csi-resizer-role-cfg Oct 5 12:11:57.915: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-snapshotter Oct 5 
12:11:57.919: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4435 Oct 5 12:11:57.924: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4435 Oct 5 12:11:57.928: INFO: deleting *v1.Role: csi-mock-volumes-4435-3424/external-snapshotter-leaderelection-csi-mock-volumes-4435 Oct 5 12:11:57.933: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4435-3424/external-snapshotter-leaderelection Oct 5 12:11:57.937: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4435-3424/csi-mock Oct 5 12:11:57.947: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4435 Oct 5 12:11:57.951: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4435 Oct 5 12:11:57.955: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4435 Oct 5 12:11:57.960: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4435 Oct 5 12:11:57.964: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4435 Oct 5 12:11:57.968: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4435 Oct 5 12:11:57.972: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4435 Oct 5 12:11:57.977: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4435-3424/csi-mockplugin Oct 5 12:11:57.983: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4435 Oct 5 12:11:57.987: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4435-3424/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4435-3424 STEP: Waiting for namespaces [csi-mock-volumes-4435-3424] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:04.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:39.484 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497 token should not be plumbed down when csiServiceAccountTokenEnabled=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":24,"skipped":888,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:25.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 STEP: Building a driver namespace object, basename 
csi-mock-volumes-8866 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:11:25.449: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-attacher Oct 5 12:11:25.453: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8866 Oct 5 12:11:25.453: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8866 Oct 5 12:11:25.457: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8866 Oct 5 12:11:25.467: INFO: creating *v1.Role: csi-mock-volumes-8866-806/external-attacher-cfg-csi-mock-volumes-8866 Oct 5 12:11:25.471: INFO: creating *v1.RoleBinding: csi-mock-volumes-8866-806/csi-attacher-role-cfg Oct 5 12:11:25.475: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-provisioner Oct 5 12:11:25.478: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8866 Oct 5 12:11:25.478: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8866 Oct 5 12:11:25.482: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8866 Oct 5 12:11:25.492: INFO: creating *v1.Role: csi-mock-volumes-8866-806/external-provisioner-cfg-csi-mock-volumes-8866 Oct 5 12:11:25.495: INFO: creating *v1.RoleBinding: csi-mock-volumes-8866-806/csi-provisioner-role-cfg Oct 5 12:11:25.499: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-resizer Oct 5 12:11:25.503: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8866 Oct 5 12:11:25.503: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8866 Oct 5 12:11:25.507: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8866 Oct 5 12:11:25.511: INFO: creating *v1.Role: csi-mock-volumes-8866-806/external-resizer-cfg-csi-mock-volumes-8866 Oct 5 12:11:25.515: INFO: creating *v1.RoleBinding: csi-mock-volumes-8866-806/csi-resizer-role-cfg Oct 5 12:11:25.521: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-snapshotter Oct 5 12:11:25.526: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8866 Oct 5 12:11:25.526: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8866 Oct 5 12:11:25.530: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8866 Oct 5 12:11:25.534: INFO: creating *v1.Role: csi-mock-volumes-8866-806/external-snapshotter-leaderelection-csi-mock-volumes-8866 Oct 5 12:11:25.537: INFO: creating *v1.RoleBinding: csi-mock-volumes-8866-806/external-snapshotter-leaderelection Oct 5 12:11:25.541: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-mock Oct 5 12:11:25.544: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8866 Oct 5 12:11:25.548: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8866 Oct 5 12:11:25.552: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8866 Oct 5 12:11:25.555: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8866 Oct 5 12:11:25.559: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8866 Oct 5 12:11:25.562: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8866 Oct 5 12:11:25.565: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8866 Oct 5 12:11:25.569: INFO: creating *v1.StatefulSet: csi-mock-volumes-8866-806/csi-mockplugin Oct 5 
12:11:25.575: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8866 Oct 5 12:11:25.579: INFO: creating *v1.StatefulSet: csi-mock-volumes-8866-806/csi-mockplugin-attacher Oct 5 12:11:25.584: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8866" Oct 5 12:11:25.587: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8866 to register on node v122-worker2 STEP: Creating pod Oct 5 12:11:30.601: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:11:30.608: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-2rpqx] to have phase Bound Oct 5 12:11:30.611: INFO: PersistentVolumeClaim pvc-2rpqx found but phase is Pending instead of Bound. Oct 5 12:11:32.615: INFO: PersistentVolumeClaim pvc-2rpqx found and phase=Bound (2.007258233s) STEP: Deleting the previously created pod Oct 5 12:11:40.635: INFO: Deleting pod "pvc-volume-tester-lfmbt" in namespace "csi-mock-volumes-8866" Oct 5 12:11:40.640: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lfmbt" to be fully deleted STEP: Checking CSI driver logs Oct 5 12:11:44.656: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/dfa7f2be-30eb-4870-b227-d677903fbd02/volumes/kubernetes.io~csi/pvc-75be10f0-151c-440e-8c3f-60ccbcacef09/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-lfmbt Oct 5 12:11:44.656: INFO: Deleting pod "pvc-volume-tester-lfmbt" in namespace "csi-mock-volumes-8866" STEP: Deleting claim pvc-2rpqx Oct 5 12:11:44.668: INFO: Waiting up to 2m0s for PersistentVolume pvc-75be10f0-151c-440e-8c3f-60ccbcacef09 to get deleted Oct 5 12:11:44.670: INFO: PersistentVolume pvc-75be10f0-151c-440e-8c3f-60ccbcacef09 found and phase=Bound (2.792186ms) Oct 5 12:11:46.674: INFO: PersistentVolume pvc-75be10f0-151c-440e-8c3f-60ccbcacef09 found and phase=Released (2.006626065s) Oct 5 12:11:48.679: INFO: PersistentVolume pvc-75be10f0-151c-440e-8c3f-60ccbcacef09 found and phase=Released (4.011176961s) Oct 5 12:11:50.683: INFO: PersistentVolume pvc-75be10f0-151c-440e-8c3f-60ccbcacef09 found and phase=Released (6.015780548s) Oct 5 12:11:52.688: INFO: PersistentVolume pvc-75be10f0-151c-440e-8c3f-60ccbcacef09 was removed STEP: Deleting storageclass csi-mock-volumes-8866-scd8l7s STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8866 STEP: Waiting for namespaces [csi-mock-volumes-8866] to vanish STEP: uninstalling csi mock driver Oct 5 12:11:58.704: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-attacher Oct 5 12:11:58.709: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8866 Oct 5 12:11:58.713: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8866 Oct 5 12:11:58.718: INFO: deleting *v1.Role: csi-mock-volumes-8866-806/external-attacher-cfg-csi-mock-volumes-8866 Oct 5 12:11:58.722: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8866-806/csi-attacher-role-cfg Oct 5 12:11:58.727: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-provisioner Oct 5 12:11:58.731: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8866 Oct 5 12:11:58.736: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8866 Oct 5 12:11:58.741: INFO: deleting *v1.Role: 
csi-mock-volumes-8866-806/external-provisioner-cfg-csi-mock-volumes-8866 Oct 5 12:11:58.745: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8866-806/csi-provisioner-role-cfg Oct 5 12:11:58.751: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-resizer Oct 5 12:11:58.756: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8866 Oct 5 12:11:58.760: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8866 Oct 5 12:11:58.764: INFO: deleting *v1.Role: csi-mock-volumes-8866-806/external-resizer-cfg-csi-mock-volumes-8866 Oct 5 12:11:58.768: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8866-806/csi-resizer-role-cfg Oct 5 12:11:58.772: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-snapshotter Oct 5 12:11:58.777: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8866 Oct 5 12:11:58.781: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8866 Oct 5 12:11:58.785: INFO: deleting *v1.Role: csi-mock-volumes-8866-806/external-snapshotter-leaderelection-csi-mock-volumes-8866 Oct 5 12:11:58.790: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8866-806/external-snapshotter-leaderelection Oct 5 12:11:58.794: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8866-806/csi-mock Oct 5 12:11:58.799: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8866 Oct 5 12:11:58.803: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8866 Oct 5 12:11:58.819: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8866 Oct 5 12:11:58.827: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8866 Oct 5 12:11:58.832: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8866 Oct 5 12:11:58.836: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8866 Oct 5 12:11:58.840: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8866 Oct 5 12:11:58.844: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8866-806/csi-mockplugin Oct 5 12:11:58.849: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8866 Oct 5 12:11:58.854: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8866-806/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8866-806 STEP: Waiting for namespaces [csi-mock-volumes-8866-806] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:04.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:39.515 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444 should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":10,"skipped":202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:04.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 STEP: Creating a pod to test hostPath subPath Oct 5 12:12:04.167: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5352" to be "Succeeded or Failed" Oct 5 12:12:04.171: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.117418ms Oct 5 12:12:06.175: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007931264s Oct 5 12:12:08.181: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013105346s STEP: Saw pod success Oct 5 12:12:08.181: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Oct 5 12:12:08.184: INFO: Trying to get logs from node v122-worker pod pod-host-path-test container test-container-2: STEP: delete the pod Oct 5 12:12:08.199: INFO: Waiting for pod pod-host-path-test to disappear Oct 5 12:12:08.202: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:08.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5352" for this suite. 
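The HostPath spec above verifies subPath support on a hostPath volume. The test's own pod manifest is not in this log, but the general shape of a subPath mount is sketched below (made-up names and paths; the same volume is mounted twice, once whole and once restricted to a subdirectory via volumeMounts[].subPath):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-subpath-demo            # illustrative name
spec:
  restartPolicy: Never
  initContainers:
  - name: setup
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "mkdir -p /full/sub-path"]   # make sure the subdirectory exists
    volumeMounts:
    - name: host-vol
      mountPath: /full
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "echo hello > /sub/data && cat /full/sub-path/data"]
    volumeMounts:
    - name: host-vol
      mountPath: /full                   # the whole host directory
    - name: host-vol
      mountPath: /sub
      subPath: sub-path                  # only <hostPath>/sub-path is visible here
  volumes:
  - name: host-vol
    hostPath:
      path: /tmp/hostpath-subpath-demo
      type: DirectoryOrCreate
EOF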
• ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":25,"skipped":948,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:06:49.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-ce284b16-bb06-4245-ab2f-a16737c286b4" Oct 5 12:06:51.471: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ce284b16-bb06-4245-ab2f-a16737c286b4 && dd if=/dev/zero of=/tmp/local-volume-test-ce284b16-bb06-4245-ab2f-a16737c286b4/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ce284b16-bb06-4245-ab2f-a16737c286b4/file] Namespace:persistent-local-volumes-test-5254 PodName:hostexec-v122-worker2-75hw5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:51.471: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:06:51.678: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ce284b16-bb06-4245-ab2f-a16737c286b4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5254 PodName:hostexec-v122-worker2-75hw5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:06:51.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:06:51.833: INFO: Creating a PV followed by a PVC Oct 5 12:06:51.842: INFO: Waiting for PV local-pvldknd to bind to PVC pvc-tv9fj Oct 5 12:06:51.842: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-tv9fj] to have phase Bound Oct 5 12:06:51.845: INFO: PersistentVolumeClaim pvc-tv9fj found but phase is Pending instead of Bound. Oct 5 12:06:53.848: INFO: PersistentVolumeClaim pvc-tv9fj found but phase is Pending instead of Bound. Oct 5 12:06:55.852: INFO: PersistentVolumeClaim pvc-tv9fj found but phase is Pending instead of Bound. Oct 5 12:06:57.857: INFO: PersistentVolumeClaim pvc-tv9fj found but phase is Pending instead of Bound. Oct 5 12:06:59.861: INFO: PersistentVolumeClaim pvc-tv9fj found but phase is Pending instead of Bound. Oct 5 12:07:01.866: INFO: PersistentVolumeClaim pvc-tv9fj found but phase is Pending instead of Bound. Oct 5 12:07:03.870: INFO: PersistentVolumeClaim pvc-tv9fj found but phase is Pending instead of Bound. 
Oct 5 12:07:05.875: INFO: PersistentVolumeClaim pvc-tv9fj found and phase=Bound (14.03318017s) Oct 5 12:07:05.875: INFO: Waiting up to 3m0s for PersistentVolume local-pvldknd to have phase Bound Oct 5 12:07:05.878: INFO: PersistentVolume local-pvldknd found and phase=Bound (3.188679ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Oct 5 12:07:07.906: INFO: pod "pod-2dfdc6f1-9191-4301-96e6-b9954dca0603" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:07:07.906: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5254 PodName:pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:07:07.906: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:07:08.047: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000161 seconds, 109.2KB/s", err: Oct 5 12:07:08.047: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-5254 PodName:pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:07:08.047: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:07:08.168: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Oct 5 12:12:08.186: FAIL: Unexpected error: <*errors.errorString | 0xc000fb1f80>: { s: "pod \"pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5\" is not Running: timed out waiting for the condition", } pod "pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5" is not Running: timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.twoPodsReadWriteTest(0xc002cb4b00, 0xc003330510, 0xc00109ad80) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:749 +0x2d6 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.4.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250 +0x45 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00022de00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00022de00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc00022de00, 0x729c7d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Volume type: block] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:12:08.188: INFO: Deleting PersistentVolumeClaim "pvc-tv9fj" Oct 5 12:12:08.193: INFO: Deleting PersistentVolume "local-pvldknd" Oct 5 12:12:08.198: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ce284b16-bb06-4245-ab2f-a16737c286b4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5254 PodName:hostexec-v122-worker2-75hw5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:08.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop8" on node "v122-worker2" at path /tmp/local-volume-test-ce284b16-bb06-4245-ab2f-a16737c286b4/file Oct 5 12:12:08.334: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-5254 PodName:hostexec-v122-worker2-75hw5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:08.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ce284b16-bb06-4245-ab2f-a16737c286b4 Oct 5 12:12:08.486: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ce284b16-bb06-4245-ab2f-a16737c286b4] Namespace:persistent-local-volumes-test-5254 PodName:hostexec-v122-worker2-75hw5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:08.486: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-5254". STEP: Found 14 events. 
Oct 5 12:12:08.630: INFO: At 2022-10-05 12:06:49 +0000 UTC - event for hostexec-v122-worker2-75hw5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-5254/hostexec-v122-worker2-75hw5 to v122-worker2 Oct 5 12:12:08.630: INFO: At 2022-10-05 12:06:49 +0000 UTC - event for hostexec-v122-worker2-75hw5: {kubelet v122-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Oct 5 12:12:08.630: INFO: At 2022-10-05 12:06:49 +0000 UTC - event for hostexec-v122-worker2-75hw5: {kubelet v122-worker2} Created: Created container agnhost-container Oct 5 12:12:08.630: INFO: At 2022-10-05 12:06:50 +0000 UTC - event for hostexec-v122-worker2-75hw5: {kubelet v122-worker2} Started: Started container agnhost-container Oct 5 12:12:08.630: INFO: At 2022-10-05 12:06:51 +0000 UTC - event for pvc-tv9fj: {persistentvolume-controller } ProvisioningFailed: no volume plugin matched name: kubernetes.io/no-provisioner Oct 5 12:12:08.630: INFO: At 2022-10-05 12:07:05 +0000 UTC - event for pod-2dfdc6f1-9191-4301-96e6-b9954dca0603: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-5254/pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 to v122-worker2 Oct 5 12:12:08.630: INFO: At 2022-10-05 12:07:06 +0000 UTC - event for pod-2dfdc6f1-9191-4301-96e6-b9954dca0603: {kubelet v122-worker2} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "local-pvldknd" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvldknd" Oct 5 12:12:08.630: INFO: At 2022-10-05 12:07:06 +0000 UTC - event for pod-2dfdc6f1-9191-4301-96e6-b9954dca0603: {kubelet v122-worker2} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "local-pvldknd" volumeMapPath "/var/lib/kubelet/pods/6b6369d2-e204-4cea-bf99-8c4ca0cadc6a/volumeDevices/kubernetes.io~local-volume" Oct 5 12:12:08.630: INFO: At 2022-10-05 12:07:06 +0000 UTC - event for pod-2dfdc6f1-9191-4301-96e6-b9954dca0603: {kubelet v122-worker2} Created: Created container write-pod Oct 5 12:12:08.630: INFO: At 2022-10-05 12:07:06 +0000 UTC - event for pod-2dfdc6f1-9191-4301-96e6-b9954dca0603: {kubelet v122-worker2} Started: Started container write-pod Oct 5 12:12:08.630: INFO: At 2022-10-05 12:07:06 +0000 UTC - event for pod-2dfdc6f1-9191-4301-96e6-b9954dca0603: {kubelet v122-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine Oct 5 12:12:08.630: INFO: At 2022-10-05 12:07:08 +0000 UTC - event for pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-5254/pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5 to v122-worker2 Oct 5 12:12:08.630: INFO: At 2022-10-05 12:07:08 +0000 UTC - event for pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5: {kubelet v122-worker2} FailedMapVolume: MapVolume.MapBlockVolume failed for volume "local-pvldknd" : blkUtil.AttachFileDevice failed. 
globalMapPath:/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvldknd, podUID: 7ec7f325-a468-473f-8d5b-6c62bba67f96: makeLoopDevice failed for path /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvldknd/7ec7f325-a468-473f-8d5b-6c62bba67f96: losetup -f /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvldknd/7ec7f325-a468-473f-8d5b-6c62bba67f96 failed: exit status 1 Oct 5 12:12:08.630: INFO: At 2022-10-05 12:09:11 +0000 UTC - event for pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5: {kubelet v122-worker2} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[kube-api-access-gn4dx volume1]: timed out waiting for the condition Oct 5 12:12:08.634: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 12:12:08.634: INFO: hostexec-v122-worker2-75hw5 v122-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:06:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:06:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:06:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:06:49 +0000 UTC }] Oct 5 12:12:08.634: INFO: pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 v122-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:07:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:07:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:07:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:07:05 +0000 UTC }] Oct 5 12:12:08.634: INFO: pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5 v122-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:07:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:07:08 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:07:08 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:07:08 +0000 UTC }] Oct 5 12:12:08.634: INFO: Oct 5 12:12:08.638: INFO: Logging node info for node v122-control-plane Oct 5 12:12:08.641: INFO: Node Info: &Node{ObjectMeta:{v122-control-plane 0bba5de9-314a-4743-bf02-bde0ec06daf3 12289 0 2022-10-05 11:59:47 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-10-05 11:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 11:59:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-10-05 
12:00:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:10:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:10:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:10:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:10:22 +0000 UTC,LastTransitionTime:2022-10-05 12:00:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:v122-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:90a9e9edfe9d44d59ee2bec7a8da01cd,SystemUUID:2e684780-1fcb-4016-9109-255b79db130f,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 
k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:12:08.641: INFO: Logging kubelet events for node v122-control-plane Oct 5 12:12:08.646: INFO: Logging pods the kubelet thinks is on node v122-control-plane Oct 5 12:12:08.677: INFO: kube-controller-manager-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.677: INFO: Container kube-controller-manager ready: true, restart count 0 Oct 5 12:12:08.677: INFO: kube-scheduler-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.677: INFO: Container kube-scheduler ready: true, restart count 0 Oct 5 12:12:08.677: INFO: kindnet-g8rqz started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.677: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:12:08.677: INFO: kube-proxy-xtt57 started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.677: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:12:08.677: INFO: create-loop-devs-lvpbc started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.677: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:12:08.677: INFO: etcd-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.677: INFO: Container etcd ready: true, restart count 0 Oct 5 12:12:08.677: INFO: kube-apiserver-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.677: INFO: Container kube-apiserver ready: true, restart count 0 Oct 5 12:12:08.748: INFO: Latency metrics for node v122-control-plane Oct 5 12:12:08.748: INFO: Logging node info for node v122-worker Oct 5 12:12:08.752: INFO: Node Info: &Node{ObjectMeta:{v122-worker 8286eab4-ee46-4103-bc96-cf44e85cf562 14057 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:v122-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ce5667169114cc58989bd26cdb88021,SystemUUID:f1b8869e-1c17-4972-b832-4d15146806a4,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca 
k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:12:08.752: INFO: Logging kubelet events for node v122-worker Oct 5 12:12:08.758: INFO: Logging pods the kubelet thinks is on node v122-worker Oct 5 12:12:08.767: INFO: hostexec-v122-worker-mck7x started at 2022-10-05 12:11:42 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.767: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:12:08.767: INFO: pod-ephm-test-projected-64t9 started at 2022-10-05 12:10:03 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.767: INFO: Container test-container-subpath-projected-64t9 ready: false, restart count 0 Oct 5 12:12:08.767: INFO: pod-c07ff236-7c76-43f7-b7f5-b6e654f6e050 started at 2022-10-05 12:11:51 +0000 UTC (0+1 container 
statuses recorded) Oct 5 12:12:08.767: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:12:08.767: INFO: test-hostpath-type-lcqtq started at 2022-10-05 12:12:04 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.767: INFO: Container host-path-testing ready: true, restart count 0 Oct 5 12:12:08.767: INFO: create-loop-devs-f76cj started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.767: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:12:08.767: INFO: kube-proxy-xkzrn started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.767: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:12:08.767: INFO: test-hostpath-type-d99gj started at 2022-10-05 12:11:32 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.767: INFO: Container host-path-sh-testing ready: true, restart count 0 Oct 5 12:12:08.767: INFO: test-hostpath-type-bqpdv started at 2022-10-05 12:12:07 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.767: INFO: Container host-path-testing ready: false, restart count 0 Oct 5 12:12:08.767: INFO: pod-secrets-76b16dac-27d0-4343-a0fe-b8ed5dd81977 started at 2022-10-05 12:06:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.767: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:12:08.767: INFO: kindnet-rkh8m started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.767: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:12:08.890: INFO: Latency metrics for node v122-worker Oct 5 12:12:08.890: INFO: Logging node info for node v122-worker2 Oct 5 12:12:08.893: INFO: Node Info: &Node{ObjectMeta:{v122-worker2 e098b7b6-6804-492f-b9ec-650d1924542e 14116 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:11:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: 
{{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:v122-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:feea07f38e414515ae57b946e27fa7bb,SystemUUID:07d898dc-4331-403b-9bdf-da8ef413d01c,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:138177747,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:12:08.894: INFO: Logging kubelet events for node v122-worker2 Oct 5 12:12:08.898: INFO: Logging pods the kubelet thinks is on node v122-worker2 Oct 5 12:12:08.908: INFO: kindnet-vqtz2 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:12:08.909: INFO: kube-proxy-pwsq7 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:12:08.909: INFO: csi-mockplugin-resizer-0 started at 2022-10-05 12:12:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container csi-resizer ready: false, restart count 0 Oct 5 12:12:08.909: INFO: pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 started at 2022-10-05 12:07:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:12:08.909: INFO: pod-secrets-e827c9fc-8fe2-4070-8ecd-1f57a842134f started at 2022-10-05 12:08:46 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:12:08.909: INFO: pod-configmaps-e719227e-7d0c-41a8-a6cd-9102f2fe8d3f started at 2022-10-05 12:11:30 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:12:08.909: INFO: coredns-78fcd69978-srwh8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container coredns ready: true, restart count 0 Oct 5 12:12:08.909: INFO: local-path-provisioner-58c8ccd54c-lkwwv started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container local-path-provisioner ready: true, restart count 0 Oct 5 12:12:08.909: INFO: pod-subpath-test-configmap-j7dz started at 2022-10-05 12:10:57 +0000 
UTC (1+2 container statuses recorded) Oct 5 12:12:08.909: INFO: Init container init-volume-configmap-j7dz ready: true, restart count 0 Oct 5 12:12:08.909: INFO: Container test-container-subpath-configmap-j7dz ready: true, restart count 3 Oct 5 12:12:08.909: INFO: Container test-container-volume-configmap-j7dz ready: true, restart count 0 Oct 5 12:12:08.909: INFO: hostexec-v122-worker2-75hw5 started at 2022-10-05 12:06:49 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:12:08.909: INFO: coredns-78fcd69978-vrzs8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container coredns ready: true, restart count 0 Oct 5 12:12:08.909: INFO: pod-configmaps-0701a096-7034-45ea-90fd-45bfd2a603de started at 2022-10-05 12:09:23 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:12:08.909: INFO: csi-mockplugin-0 started at 2022-10-05 12:12:08 +0000 UTC (0+3 container statuses recorded) Oct 5 12:12:08.909: INFO: Container csi-provisioner ready: false, restart count 0 Oct 5 12:12:08.909: INFO: Container driver-registrar ready: false, restart count 0 Oct 5 12:12:08.909: INFO: Container mock ready: false, restart count 0 Oct 5 12:12:08.909: INFO: create-loop-devs-6sf59 started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:12:08.909: INFO: pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5 started at 2022-10-05 12:07:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:12:08.909: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:12:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:08.909: INFO: Container csi-attacher ready: false, restart count 0 Oct 5 12:12:09.071: INFO: Latency metrics for node v122-worker2 Oct 5 12:12:09.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5254" for this suite. 
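Note on the failure above: pod2 (pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5) never left Pending because the kubelet's MapVolume.MapBlockVolume step could not attach the PV's backing file to a loop device ("losetup -f ... failed: exit status 1" in the FailedMapVolume event). On kind CI nodes a common cause is running out of usable /dev/loopN devices, which is presumably what the create-loop-devs daemonset on these nodes is for. A minimal shell sketch for checking that by hand on the affected node; entering the node with docker exec is an assumption that only holds for a kind cluster, and the paths below are copied from the event text:

  # enter the node that reported FailedMapVolume (v122-worker2 in this run)
  docker exec -it v122-worker2 sh

  ls -l /dev/loop*   # loop device nodes present on the node
  losetup -a         # loop devices already attached to a backing file
  losetup -f         # prints the first free device, or fails if none is available

  # roughly the attach the kubelet keeps retrying for this block-mode local PV
  # (pod UID and global map path taken from the FailedMapVolume event above)
  losetup -f /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvldknd/7ec7f325-a468-473f-8d5b-6c62bba67f96

If losetup -f has nothing free to hand out, detaching stale devices (losetup -d /dev/loopN) or adding more loop devices is the usual remedy; the "failed to set up loop device: No such device or address" error later in this log looks like the same symptom hitting test setup.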
• Failure [319.664 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: block]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Two pods mounting a local volume at the same time
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
should be able to write from pod1 and read from pod2 [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
Oct 5 12:12:08.186: Unexpected error:
<*errors.errorString | 0xc000fb1f80>: { s: "pod \"pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5\" is not Running: timed out waiting for the condition", }
pod "pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5" is not Running: timed out waiting for the condition
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:749
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":369,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:12:04.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-block-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325
STEP: Create a pod for further testing
Oct 5 12:12:04.984: INFO: The status of Pod test-hostpath-type-lcqtq is Pending, waiting for it to be Running (with Ready = true)
Oct 5 12:12:06.989: INFO: The status of Pod test-hostpath-type-lcqtq is Running (Ready = true)
STEP: running on node v122-worker
STEP: Create a block device for further testing
Oct 5 12:12:06.992: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-9136 PodName:test-hostpath-type-lcqtq ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 5 12:12:06.992: INFO: >>> kubeConfig: /root/.kube/config
[It] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340
STEP: Creating pod
STEP: Checking for HostPathType error event
[AfterEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:12:09.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-block-dev-9136" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev","total":-1,"completed":11,"skipped":234,"failed":0} S ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":10,"skipped":306,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:03.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Oct 5 12:10:33.722: INFO: Deleting pod "pv-8865"/"pod-ephm-test-projected-64t9" Oct 5 12:10:33.722: INFO: Deleting pod "pod-ephm-test-projected-64t9" in namespace "pv-8865" Oct 5 12:10:33.727: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-64t9" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:09.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8865" for this suite. 
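Note: the Ephemeralstorage spec above checks only that a pod stuck on a missing volume source can still be deleted, with the test allowing up to 5m0s for the deletion to finish. A rough kubectl equivalent of what it asserts, using the namespace and pod name from this run (both are gone by the end of the spec, so this is only a sketch):

  # the pod is expected to be stuck, since the projected secret it references never exists
  kubectl -n pv-8865 describe pod pod-ephm-test-projected-64t9

  # deletion must still complete even though the volume never mounted
  kubectl -n pv-8865 delete pod pod-ephm-test-projected-64t9 --timeout=5m

  # once deleted, the pod should no longer be found
  kubectl -n pv-8865 get pod pod-ephm-test-projected-64t9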
• [SLOW TEST:126.066 seconds]
[sig-storage] Ephemeralstorage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
When pod refers to non-existent ephemeral storage
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
should allow deletion of pod with invalid volume : secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":11,"skipped":306,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:12:09.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-11a4d3df-0643-47cb-a75a-a141c9ddccf6"
Oct 5 12:12:11.200: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-11a4d3df-0643-47cb-a75a-a141c9ddccf6 && dd if=/dev/zero of=/tmp/local-volume-test-11a4d3df-0643-47cb-a75a-a141c9ddccf6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-11a4d3df-0643-47cb-a75a-a141c9ddccf6/file] Namespace:persistent-local-volumes-test-3463 PodName:hostexec-v122-worker-5t52w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:12:11.200: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:12:11.377: INFO: exec v122-worker: command: mkdir -p /tmp/local-volume-test-11a4d3df-0643-47cb-a75a-a141c9ddccf6 && dd if=/dev/zero of=/tmp/local-volume-test-11a4d3df-0643-47cb-a75a-a141c9ddccf6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-11a4d3df-0643-47cb-a75a-a141c9ddccf6/file
Oct 5 12:12:11.377: INFO: exec v122-worker: stdout: ""
Oct 5 12:12:11.377: INFO: exec v122-worker: stderr: "5120+0 records in\n5120+0 records out\n20971520 bytes (21 MB, 20 MiB) copied, 0.0227994 s, 920 MB/s\nlosetup: /tmp/local-volume-test-11a4d3df-0643-47cb-a75a-a141c9ddccf6/file: failed to set up loop device: No such device or address\n"
Oct 5 12:12:11.377: INFO: exec v122-worker: exit code: 0
Oct 5 12:12:11.377: FAIL: Unexpected error:
: { Err: { s: "command terminated with exit code 1", }, Code: 1, }
command terminated with exit code 1
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).createAndSetupLoopDevice(0xc0036b7680, 0xc001aec300, 0x3b, 0xc00388e000, 0x1400000)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 +0x45b
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeBlock(0xc0036b7680, 0xc00388e000, 0x0, 0x78cd2a8)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:146 +0x65
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc0036b7680, 0xc00388e000, 0x702c9b3, 0x5, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:306 +0x326
k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc003810d80, 0x7069370, 0x14, 0xc00388e000, 0x1, 0x0, 0x0, 0xc0037d7e00)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:837 +0x157
k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc003810d80, 0x7069370, 0x14, 0xc00388e000, 0x1, 0x703610f, 0x9, 0x0, 0x0, 0x0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1102 +0x87
k8s.io/kubernetes/test/e2e/storage.glob..func21.2.1()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 +0xb6
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001201c80)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001201c80)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
testing.tRunner(0xc001201c80, 0x729c7d8)
/usr/local/go/src/testing/testing.go:1203 +0xe5
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:1248 +0x2b3
[AfterEach] [Volume type: blockfswithoutformat]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Oct 5 12:12:11.379: INFO: Deleting PersistentVolumeClaim "pvc-64z5p"
Oct 5 12:12:11.383: INFO: Deleting PersistentVolume "local-pv849df"
Oct 5 12:12:11.386: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3463 PodName:hostexec-v122-worker-5t52w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:12:11.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Tear down block device "" on node "v122-worker" at path /tmp/local-volume-test-740ded68-de97-4ce4-9b66-53b9b4e0c6a4/file
Oct 5 12:12:11.520: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d ] Namespace:persistent-local-volumes-test-3463 PodName:hostexec-v122-worker-5t52w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Oct 5 12:12:11.520: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 12:12:11.656: INFO: exec v122-worker: command: losetup -d
Oct 5 12:12:11.656: INFO: exec v122-worker: stdout: ""
Oct 5 12:12:11.656: INFO: exec v122-worker: stderr: "losetup: option requires an argument -- 'd'\nTry 'losetup --help' for more information.\n"
Oct 5 12:12:11.656: INFO: exec v122-worker: exit code: 0
Oct 5 12:12:11.657: FAIL: Unexpected error:
: { Err: { s: "command terminated with exit code 1", }, Code: 1, }
command terminated with exit code 1
occurred
Full
Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).teardownLoopDevice(0xc0036b7680, 0xc0036704c0, 0x3b, 0xc0025c2600) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:161 +0x255 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlock(0xc0036b7680, 0xc004f27f00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:166 +0x4f k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Remove(0xc0036b7680, 0xc004f27f00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:335 +0x13a k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc003810d80, 0xc000d7aef0, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:861 +0x82 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 +0x65 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001201c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001201c80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc001201c80, 0x729c7d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-3463". STEP: Found 4 events. Oct 5 12:12:11.662: INFO: At 2022-10-05 12:12:09 +0000 UTC - event for hostexec-v122-worker-5t52w: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-3463/hostexec-v122-worker-5t52w to v122-worker Oct 5 12:12:11.662: INFO: At 2022-10-05 12:12:09 +0000 UTC - event for hostexec-v122-worker-5t52w: {kubelet v122-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Oct 5 12:12:11.662: INFO: At 2022-10-05 12:12:09 +0000 UTC - event for hostexec-v122-worker-5t52w: {kubelet v122-worker} Created: Created container agnhost-container Oct 5 12:12:11.662: INFO: At 2022-10-05 12:12:09 +0000 UTC - event for hostexec-v122-worker-5t52w: {kubelet v122-worker} Started: Started container agnhost-container Oct 5 12:12:11.666: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 12:12:11.666: INFO: hostexec-v122-worker-5t52w v122-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:12:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:12:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:12:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-10-05 12:12:09 +0000 UTC }] Oct 5 12:12:11.666: INFO: Oct 5 12:12:11.670: INFO: Logging node info for node v122-control-plane Oct 5 12:12:11.673: INFO: Node Info: &Node{ObjectMeta:{v122-control-plane 0bba5de9-314a-4743-bf02-bde0ec06daf3 12289 0 2022-10-05 11:59:47 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-10-05 11:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 11:59:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-10-05 12:00:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:10:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:10:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:10:22 +0000 UTC,LastTransitionTime:2022-10-05 11:59:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:10:22 +0000 UTC,LastTransitionTime:2022-10-05 12:00:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:v122-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:90a9e9edfe9d44d59ee2bec7a8da01cd,SystemUUID:2e684780-1fcb-4016-9109-255b79db130f,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:12:11.674: INFO: Logging kubelet events for node v122-control-plane Oct 5 12:12:11.680: INFO: Logging pods the kubelet thinks is on node v122-control-plane Oct 5 12:12:11.699: INFO: kube-scheduler-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.699: INFO: Container kube-scheduler ready: true, restart count 0 Oct 5 12:12:11.699: INFO: kindnet-g8rqz started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.699: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:12:11.699: INFO: kube-proxy-xtt57 started at 2022-10-05 12:00:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.699: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:12:11.699: INFO: create-loop-devs-lvpbc started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.699: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:12:11.699: INFO: etcd-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.700: INFO: Container etcd ready: true, restart count 0 Oct 5 12:12:11.700: INFO: kube-apiserver-v122-control-plane started at 2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.700: INFO: Container kube-apiserver ready: true, restart count 0 Oct 5 12:12:11.700: INFO: kube-controller-manager-v122-control-plane started at 
2022-10-05 11:59:52 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.700: INFO: Container kube-controller-manager ready: true, restart count 0 Oct 5 12:12:11.774: INFO: Latency metrics for node v122-control-plane Oct 5 12:12:11.774: INFO: Logging node info for node v122-worker Oct 5 12:12:11.778: INFO: Node Info: &Node{ObjectMeta:{v122-worker 8286eab4-ee46-4103-bc96-cf44e85cf562 14057 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:v122-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ce5667169114cc58989bd26cdb88021,SystemUUID:f1b8869e-1c17-4972-b832-4d15146806a4,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 
k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:12:11.778: INFO: Logging kubelet events for node v122-worker Oct 5 12:12:11.784: INFO: Logging pods the kubelet thinks is on node v122-worker Oct 5 12:12:11.795: INFO: kindnet-rkh8m started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.795: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:12:11.795: INFO: kube-proxy-xkzrn started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.795: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:12:11.795: INFO: test-hostpath-type-d99gj started at 2022-10-05 12:11:32 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.795: INFO: Container host-path-sh-testing ready: true, restart count 0 Oct 5 12:12:11.795: INFO: pod-secrets-76b16dac-27d0-4343-a0fe-b8ed5dd81977 started at 2022-10-05 12:06:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.795: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:12:11.795: INFO: hostexec-v122-worker-mck7x started at 2022-10-05 12:11:42 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.795: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:12:11.795: INFO: pod-3a44a224-19d4-40eb-bffd-4d1756aee3aa started at 2022-10-05 12:12:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.795: INFO: Container test-container ready: false, restart count 0 Oct 5 12:12:11.795: INFO: create-loop-devs-f76cj started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.795: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:12:11.795: INFO: hostexec-v122-worker-5t52w started at 2022-10-05 12:12:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.795: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:12:11.796: INFO: pod-c07ff236-7c76-43f7-b7f5-b6e654f6e050 started at 2022-10-05 12:11:51 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.796: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:12:11.796: INFO: test-hostpath-type-lcqtq started at 2022-10-05 12:12:04 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.796: INFO: Container host-path-testing ready: true, restart count 0 Oct 5 12:12:11.796: INFO: hostexec-v122-worker-5mnr7 started at 2022-10-05 12:12:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.796: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:12:11.918: INFO: Latency metrics for node v122-worker Oct 5 12:12:11.918: INFO: Logging node info for node v122-worker2 Oct 5 12:12:11.921: INFO: Node Info: &Node{ObjectMeta:{v122-worker2 e098b7b6-6804-492f-b9ec-650d1924542e 14363 0 2022-10-05 12:00:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v122-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-949":"csi-mock-csi-mock-volumes-949"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 
2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2022-10-05 12:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-05 12:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-05 12:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v122/v122-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-05 12:11:49 +0000 UTC,LastTransitionTime:2022-10-05 12:00:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:v122-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:feea07f38e414515ae57b946e27fa7bb,SystemUUID:07d898dc-4331-403b-9bdf-da8ef413d01c,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.22.15,KubeProxyVersion:v1.22.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca 
k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:138177747,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:df9770adc7c9d21f7f2c2f8e04efed16e9ca12f9c00aad56e5753bd1819ad95f k8s.gcr.io/kube-proxy:v1.22.15],SizeBytes:105434107,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:393eaa5fb5582428fe7b343ba5b729dbdb8a7d23832e3e9183f515fee5e478ca k8s.gcr.io/kube-apiserver:v1.22.15],SizeBytes:74691038,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:0c81eb77294a8fb7706e538aae9423a6301e00bae81cebf0a67c42f27e6714c7 k8s.gcr.io/kube-controller-manager:v1.22.15],SizeBytes:67534962,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:974ce90d1dfaa71470d7e80077a3dd85ed92848347005df49d710c1576511a2c k8s.gcr.io/kube-scheduler:v1.22.15],SizeBytes:53936244,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 5 12:12:11.922: INFO: Logging kubelet events for node v122-worker2 Oct 5 12:12:11.927: INFO: Logging pods the kubelet thinks is on node v122-worker2 Oct 5 12:12:11.947: INFO: csi-mockplugin-0 started at 2022-10-05 12:12:08 +0000 UTC (0+3 container statuses recorded) Oct 5 12:12:11.947: INFO: Container csi-provisioner ready: true, restart count 0 Oct 5 12:12:11.947: INFO: Container driver-registrar ready: true, restart count 0 Oct 5 12:12:11.947: INFO: Container mock ready: true, restart count 0 Oct 5 12:12:11.947: INFO: coredns-78fcd69978-vrzs8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: 
INFO: Container coredns ready: true, restart count 0 Oct 5 12:12:11.947: INFO: pod-configmaps-0701a096-7034-45ea-90fd-45bfd2a603de started at 2022-10-05 12:09:23 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:12:11.947: INFO: csi-mockplugin-attacher-0 started at 2022-10-05 12:12:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container csi-attacher ready: true, restart count 0 Oct 5 12:12:11.947: INFO: create-loop-devs-6sf59 started at 2022-10-05 12:00:14 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container loopdev ready: true, restart count 0 Oct 5 12:12:11.947: INFO: pod-f79ff21f-4fdf-4e74-928e-a01f6395dff5 started at 2022-10-05 12:07:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container write-pod ready: false, restart count 0 Oct 5 12:12:11.947: INFO: csi-mockplugin-resizer-0 started at 2022-10-05 12:12:08 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container csi-resizer ready: true, restart count 0 Oct 5 12:12:11.947: INFO: kindnet-vqtz2 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 12:12:11.947: INFO: kube-proxy-pwsq7 started at 2022-10-05 12:00:09 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 12:12:11.947: INFO: pod-subpath-test-configmap-j7dz started at 2022-10-05 12:10:57 +0000 UTC (1+2 container statuses recorded) Oct 5 12:12:11.947: INFO: Init container init-volume-configmap-j7dz ready: true, restart count 0 Oct 5 12:12:11.947: INFO: Container test-container-subpath-configmap-j7dz ready: true, restart count 3 Oct 5 12:12:11.947: INFO: Container test-container-volume-configmap-j7dz ready: true, restart count 0 Oct 5 12:12:11.947: INFO: hostexec-v122-worker2-75hw5 started at 2022-10-05 12:06:49 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container agnhost-container ready: true, restart count 0 Oct 5 12:12:11.947: INFO: pod-2dfdc6f1-9191-4301-96e6-b9954dca0603 started at 2022-10-05 12:07:05 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container write-pod ready: true, restart count 0 Oct 5 12:12:11.947: INFO: pod-secrets-e827c9fc-8fe2-4070-8ecd-1f57a842134f started at 2022-10-05 12:08:46 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container creates-volume-test ready: false, restart count 0 Oct 5 12:12:11.947: INFO: pod-configmaps-e719227e-7d0c-41a8-a6cd-9102f2fe8d3f started at 2022-10-05 12:11:30 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container agnhost-container ready: false, restart count 0 Oct 5 12:12:11.947: INFO: coredns-78fcd69978-srwh8 started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container coredns ready: true, restart count 0 Oct 5 12:12:11.947: INFO: local-path-provisioner-58c8ccd54c-lkwwv started at 2022-10-05 12:00:18 +0000 UTC (0+1 container statuses recorded) Oct 5 12:12:11.947: INFO: Container local-path-provisioner ready: true, restart count 0 Oct 5 12:12:12.107: INFO: Latency metrics for node v122-worker2 Oct 5 12:12:12.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3463" for this suite. 
• Failure in Spec Setup (BeforeEach) [2.961 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Oct 5 12:12:11.377: Unexpected error: : { Err: { s: "command terminated with exit code 1", }, Code: 1, } command terminated with exit code 1 occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":11,"skipped":235,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:09.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 5 12:12:09.151: INFO: Waiting up to 5m0s for pod "pod-3a44a224-19d4-40eb-bffd-4d1756aee3aa" in namespace "emptydir-4671" to be "Succeeded or Failed" Oct 5 12:12:09.153: INFO: Pod "pod-3a44a224-19d4-40eb-bffd-4d1756aee3aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.769022ms Oct 5 12:12:11.158: INFO: Pod "pod-3a44a224-19d4-40eb-bffd-4d1756aee3aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007457274s Oct 5 12:12:13.163: INFO: Pod "pod-3a44a224-19d4-40eb-bffd-4d1756aee3aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01272707s STEP: Saw pod success Oct 5 12:12:13.163: INFO: Pod "pod-3a44a224-19d4-40eb-bffd-4d1756aee3aa" satisfied condition "Succeeded or Failed" Oct 5 12:12:13.167: INFO: Trying to get logs from node v122-worker pod pod-3a44a224-19d4-40eb-bffd-4d1756aee3aa container test-container: STEP: delete the pod Oct 5 12:12:13.182: INFO: Waiting for pod pod-3a44a224-19d4-40eb-bffd-4d1756aee3aa to disappear Oct 5 12:12:13.185: INFO: Pod pod-3a44a224-19d4-40eb-bffd-4d1756aee3aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:13.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4671" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":11,"skipped":383,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:12.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Oct 5 12:12:12.184: INFO: The status of Pod test-hostpath-type-6vxzl is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:12:14.189: INFO: The status of Pod test-hostpath-type-6vxzl is Running (Ready = true) STEP: running on node v122-worker STEP: Create a block device for further testing Oct 5 12:12:14.192: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-953 PodName:test-hostpath-type-6vxzl ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:14.192: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:16.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-953" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket","total":-1,"completed":12,"skipped":251,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} S ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:16.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Oct 5 12:12:16.413: INFO: The status of Pod test-hostpath-type-f2jpc is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:12:18.420: INFO: The status of Pod test-hostpath-type-f2jpc is Running (Ready = true) STEP: running on node v122-worker [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:20.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-525" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile","total":-1,"completed":13,"skipped":252,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:13.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:12:17.323: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-22308ac8-d614-4095-ab79-4263fa06aef0] Namespace:persistent-local-volumes-test-7307 PodName:hostexec-v122-worker-5t24j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:17.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:12:17.433: INFO: Creating a PV followed by a PVC Oct 5 12:12:17.441: INFO: Waiting for PV local-pv6czlm to bind to PVC pvc-rlgmp Oct 5 12:12:17.441: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-rlgmp] to have phase Bound Oct 5 12:12:17.443: INFO: PersistentVolumeClaim pvc-rlgmp found but phase is Pending instead of Bound. 
Oct 5 12:12:19.447: INFO: PersistentVolumeClaim pvc-rlgmp found and phase=Bound (2.006233398s) Oct 5 12:12:19.447: INFO: Waiting up to 3m0s for PersistentVolume local-pv6czlm to have phase Bound Oct 5 12:12:19.450: INFO: PersistentVolume local-pv6czlm found and phase=Bound (3.040855ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Oct 5 12:12:21.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7307 exec pod-a09dc9c4-3878-4d10-b4ca-6d30b6ded117 --namespace=persistent-local-volumes-test-7307 -- stat -c %g /mnt/volume1' Oct 5 12:12:21.643: INFO: stderr: "" Oct 5 12:12:21.644: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-a09dc9c4-3878-4d10-b4ca-6d30b6ded117 in namespace persistent-local-volumes-test-7307 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:12:21.648: INFO: Deleting PersistentVolumeClaim "pvc-rlgmp" Oct 5 12:12:21.652: INFO: Deleting PersistentVolume "local-pv6czlm" STEP: Removing the test directory Oct 5 12:12:21.655: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-22308ac8-d614-4095-ab79-4263fa06aef0] Namespace:persistent-local-volumes-test-7307 PodName:hostexec-v122-worker-5t24j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:21.655: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:21.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7307" for this suite. 
• [SLOW TEST:8.539 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:09.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Oct 5 12:12:11.800: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-efce3f45-b29e-446e-9ca8-390c5657226b] Namespace:persistent-local-volumes-test-7308 PodName:hostexec-v122-worker-5mnr7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:11.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:12:11.951: INFO: Creating a PV followed by a PVC Oct 5 12:12:11.958: INFO: Waiting for PV local-pvcbhcn to bind to PVC pvc-xjfrs Oct 5 12:12:11.958: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xjfrs] to have phase Bound Oct 5 12:12:11.960: INFO: PersistentVolumeClaim pvc-xjfrs found but phase is Pending instead of Bound. Oct 5 12:12:13.964: INFO: PersistentVolumeClaim pvc-xjfrs found but phase is Pending instead of Bound. Oct 5 12:12:15.969: INFO: PersistentVolumeClaim pvc-xjfrs found but phase is Pending instead of Bound. Oct 5 12:12:17.974: INFO: PersistentVolumeClaim pvc-xjfrs found but phase is Pending instead of Bound. Oct 5 12:12:19.979: INFO: PersistentVolumeClaim pvc-xjfrs found and phase=Bound (8.020888803s) Oct 5 12:12:19.979: INFO: Waiting up to 3m0s for PersistentVolume local-pvcbhcn to have phase Bound Oct 5 12:12:19.982: INFO: PersistentVolume local-pvcbhcn found and phase=Bound (2.929307ms) [It] should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 STEP: local-volume-type: dir Oct 5 12:12:19.993: INFO: Waiting up to 5m0s for pod "pod-2faaf468-4912-449f-95c6-b40a7fb23570" in namespace "persistent-local-volumes-test-7308" to be "Unschedulable" Oct 5 12:12:19.996: INFO: Pod "pod-2faaf468-4912-449f-95c6-b40a7fb23570": Phase="Pending", Reason="", readiness=false. Elapsed: 3.110057ms Oct 5 12:12:22.001: INFO: Pod "pod-2faaf468-4912-449f-95c6-b40a7fb23570": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007839889s Oct 5 12:12:22.001: INFO: Pod "pod-2faaf468-4912-449f-95c6-b40a7fb23570" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Oct 5 12:12:22.001: INFO: Deleting PersistentVolumeClaim "pvc-xjfrs" Oct 5 12:12:22.006: INFO: Deleting PersistentVolume "local-pvcbhcn" STEP: Removing the test directory Oct 5 12:12:22.011: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-efce3f45-b29e-446e-9ca8-390c5657226b] Namespace:persistent-local-volumes-test-7308 PodName:hostexec-v122-worker-5mnr7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:22.011: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:22.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7308" for this suite. • [SLOW TEST:12.431 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":12,"skipped":315,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod management is parallel and pod has affinity","total":-1,"completed":6,"skipped":210,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:10:57.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 STEP: Create configmap STEP: Creating pod pod-subpath-test-configmap-j7dz STEP: Failing liveness probe Oct 5 12:11:01.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=subpath-8142 exec pod-subpath-test-configmap-j7dz --container 
test-container-volume-configmap-j7dz -- /bin/sh -c rm /probe-volume/probe-file' Oct 5 12:11:01.429: INFO: stderr: "" Oct 5 12:11:01.429: INFO: stdout: "" Oct 5 12:11:01.429: INFO: Pod exec output: STEP: Waiting for container to restart Oct 5 12:11:01.433: INFO: Container test-container-subpath-configmap-j7dz, restarts: 0 Oct 5 12:11:11.439: INFO: Container test-container-subpath-configmap-j7dz, restarts: 2 Oct 5 12:11:11.439: INFO: Container has restart count: 2 STEP: Fix liveness probe STEP: Waiting for container to stop restarting Oct 5 12:11:27.450: INFO: Container has restart count: 3 Oct 5 12:12:29.452: INFO: Container restart has stabilized Oct 5 12:12:29.453: INFO: Deleting pod "pod-subpath-test-configmap-j7dz" in namespace "subpath-8142" Oct 5 12:12:29.460: INFO: Wait up to 5m0s for pod "pod-subpath-test-configmap-j7dz" to be fully deleted [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:31.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8142" for this suite. • [SLOW TEST:94.346 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Container restart /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130 should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":7,"skipped":210,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":12,"skipped":420,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:21.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Oct 5 12:12:21.852: INFO: The status of Pod test-hostpath-type-9zk5d is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:12:23.857: INFO: The status of Pod test-hostpath-type-9zk5d is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:12:25.857: INFO: The status of Pod test-hostpath-type-9zk5d is Running (Ready = true) STEP: running on node v122-worker STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on 
mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:31.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-287" for this suite. • [SLOW TEST:10.105 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev","total":-1,"completed":13,"skipped":420,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:31.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Oct 5 12:12:33.595: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5139 PodName:hostexec-v122-worker-sq5z7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:33.595: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:33.712: INFO: exec v122-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Oct 5 12:12:33.712: INFO: exec v122-worker: stdout: "0\n" Oct 5 12:12:33.712: INFO: exec v122-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Oct 5 12:12:33.712: INFO: exec v122-worker: exit code: 0 Oct 5 12:12:33.712: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:33.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5139" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.183 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1250 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:31.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 STEP: Creating configMap with name projected-configmap-test-volume-fba12fa0-74b0-4208-9b4e-9b19cb02fdeb STEP: Creating a pod to test consume configMaps Oct 5 12:12:31.987: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4e003a53-d7d0-4942-ab49-a749f86239f5" in namespace "projected-1930" to be "Succeeded or Failed" Oct 5 12:12:31.990: INFO: Pod "pod-projected-configmaps-4e003a53-d7d0-4942-ab49-a749f86239f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.842067ms Oct 5 12:12:33.994: INFO: Pod "pod-projected-configmaps-4e003a53-d7d0-4942-ab49-a749f86239f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007498147s Oct 5 12:12:35.999: INFO: Pod "pod-projected-configmaps-4e003a53-d7d0-4942-ab49-a749f86239f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012143439s STEP: Saw pod success Oct 5 12:12:35.999: INFO: Pod "pod-projected-configmaps-4e003a53-d7d0-4942-ab49-a749f86239f5" satisfied condition "Succeeded or Failed" Oct 5 12:12:36.003: INFO: Trying to get logs from node v122-worker pod pod-projected-configmaps-4e003a53-d7d0-4942-ab49-a749f86239f5 container agnhost-container: STEP: delete the pod Oct 5 12:12:36.018: INFO: Waiting for pod pod-projected-configmaps-4e003a53-d7d0-4942-ab49-a749f86239f5 to disappear Oct 5 12:12:36.021: INFO: Pod pod-projected-configmaps-4e003a53-d7d0-4942-ab49-a749f86239f5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:36.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1930" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":14,"skipped":432,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:20.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:12:22.530: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9-backend && mount --bind /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9-backend /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9-backend && ln -s /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9-backend /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9] Namespace:persistent-local-volumes-test-1005 PodName:hostexec-v122-worker-98x6k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:22.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:12:22.691: INFO: Creating a PV followed by a PVC Oct 5 12:12:22.698: INFO: Waiting for PV local-pvskzz4 to bind to PVC pvc-9277j Oct 5 12:12:22.698: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9277j] to have phase Bound Oct 5 12:12:22.700: INFO: PersistentVolumeClaim pvc-9277j found but phase is Pending instead of Bound. Oct 5 12:12:24.704: INFO: PersistentVolumeClaim pvc-9277j found but phase is Pending instead of Bound. Oct 5 12:12:26.708: INFO: PersistentVolumeClaim pvc-9277j found but phase is Pending instead of Bound. Oct 5 12:12:28.712: INFO: PersistentVolumeClaim pvc-9277j found but phase is Pending instead of Bound. Oct 5 12:12:30.716: INFO: PersistentVolumeClaim pvc-9277j found but phase is Pending instead of Bound. Oct 5 12:12:32.721: INFO: PersistentVolumeClaim pvc-9277j found but phase is Pending instead of Bound. 
Oct 5 12:12:34.725: INFO: PersistentVolumeClaim pvc-9277j found and phase=Bound (12.027379561s) Oct 5 12:12:34.725: INFO: Waiting up to 3m0s for PersistentVolume local-pvskzz4 to have phase Bound Oct 5 12:12:34.728: INFO: PersistentVolume local-pvskzz4 found and phase=Bound (3.208889ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:12:40.756: INFO: pod "pod-7844e3ba-dcaa-40d8-acd6-059aa15992dc" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:12:40.756: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1005 PodName:pod-7844e3ba-dcaa-40d8-acd6-059aa15992dc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:40.756: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:40.894: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Oct 5 12:12:40.894: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1005 PodName:pod-7844e3ba-dcaa-40d8-acd6-059aa15992dc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:40.894: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:41.016: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Oct 5 12:12:41.016: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1005 PodName:pod-7844e3ba-dcaa-40d8-acd6-059aa15992dc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:41.016: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:41.136: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-7844e3ba-dcaa-40d8-acd6-059aa15992dc in namespace persistent-local-volumes-test-1005 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:12:41.142: INFO: Deleting PersistentVolumeClaim "pvc-9277j" Oct 5 12:12:41.146: INFO: Deleting PersistentVolume "local-pvskzz4" STEP: Removing the test directory Oct 5 12:12:41.151: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9 && umount /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9-backend && rm -r /tmp/local-volume-test-e2e43337-63da-4ff5-a109-0e3be26e0fc9-backend] Namespace:persistent-local-volumes-test-1005 PodName:hostexec-v122-worker-98x6k 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:41.151: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:41.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1005" for this suite. • [SLOW TEST:20.834 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":14,"skipped":263,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:22.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-36d5dc4d-cfac-42e3-88c9-e79156450cf4" Oct 5 12:12:26.306: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-36d5dc4d-cfac-42e3-88c9-e79156450cf4" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-36d5dc4d-cfac-42e3-88c9-e79156450cf4" "/tmp/local-volume-test-36d5dc4d-cfac-42e3-88c9-e79156450cf4"] Namespace:persistent-local-volumes-test-8257 PodName:hostexec-v122-worker-thxmk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:26.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:12:26.451: INFO: Creating a PV followed by a PVC Oct 5 12:12:26.459: INFO: Waiting for PV local-pvnf7lp to bind to PVC pvc-bmtrm Oct 5 12:12:26.459: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-bmtrm] to have phase Bound Oct 5 12:12:26.462: INFO: PersistentVolumeClaim pvc-bmtrm found but 
phase is Pending instead of Bound. Oct 5 12:12:28.467: INFO: PersistentVolumeClaim pvc-bmtrm found but phase is Pending instead of Bound. Oct 5 12:12:30.472: INFO: PersistentVolumeClaim pvc-bmtrm found but phase is Pending instead of Bound. Oct 5 12:12:32.476: INFO: PersistentVolumeClaim pvc-bmtrm found but phase is Pending instead of Bound. Oct 5 12:12:34.480: INFO: PersistentVolumeClaim pvc-bmtrm found and phase=Bound (8.020592163s) Oct 5 12:12:34.480: INFO: Waiting up to 3m0s for PersistentVolume local-pvnf7lp to have phase Bound Oct 5 12:12:34.483: INFO: PersistentVolume local-pvnf7lp found and phase=Bound (3.002459ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:12:36.508: INFO: pod "pod-13c7b5fd-3cd1-4c38-9ab4-13827538b6f0" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:12:36.508: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8257 PodName:pod-13c7b5fd-3cd1-4c38-9ab4-13827538b6f0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:36.508: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:36.628: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:12:36.628: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8257 PodName:pod-13c7b5fd-3cd1-4c38-9ab4-13827538b6f0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:36.628: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:36.712: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-13c7b5fd-3cd1-4c38-9ab4-13827538b6f0 in namespace persistent-local-volumes-test-8257 STEP: Creating pod2 STEP: Creating a pod Oct 5 12:12:42.735: INFO: pod "pod-e6ac8057-1e48-42ac-9336-2ba447e05950" created on Node "v122-worker" STEP: Reading in pod2 Oct 5 12:12:42.735: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8257 PodName:pod-e6ac8057-1e48-42ac-9336-2ba447e05950 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:42.735: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:42.858: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-e6ac8057-1e48-42ac-9336-2ba447e05950 in namespace persistent-local-volumes-test-8257 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:12:42.863: INFO: Deleting PersistentVolumeClaim "pvc-bmtrm" Oct 5 12:12:42.868: INFO: Deleting PersistentVolume "local-pvnf7lp" STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-36d5dc4d-cfac-42e3-88c9-e79156450cf4" Oct 5 12:12:42.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-36d5dc4d-cfac-42e3-88c9-e79156450cf4"] Namespace:persistent-local-volumes-test-8257 
PodName:hostexec-v122-worker-thxmk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:42.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:12:43.035: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-36d5dc4d-cfac-42e3-88c9-e79156450cf4] Namespace:persistent-local-volumes-test-8257 PodName:hostexec-v122-worker-thxmk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:43.035: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:43.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8257" for this suite. • [SLOW TEST:20.877 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":349,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:41.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Oct 5 12:12:41.471: INFO: The status of Pod test-hostpath-type-vxbvs is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:12:43.475: INFO: The status of Pod test-hostpath-type-vxbvs is Running (Ready = true) STEP: running on node v122-worker2 STEP: Create a block device for further testing Oct 5 12:12:43.478: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-9062 PodName:test-hostpath-type-vxbvs ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:43.478: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:45.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-9062" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile","total":-1,"completed":15,"skipped":325,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} S ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:08.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591 STEP: Building a driver namespace object, basename csi-mock-volumes-949 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:12:08.332: INFO: creating *v1.ServiceAccount: csi-mock-volumes-949-728/csi-attacher Oct 5 12:12:08.336: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-949 Oct 5 12:12:08.336: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-949 Oct 5 12:12:08.340: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-949 Oct 5 12:12:08.344: INFO: creating *v1.Role: csi-mock-volumes-949-728/external-attacher-cfg-csi-mock-volumes-949 Oct 5 12:12:08.348: INFO: creating *v1.RoleBinding: csi-mock-volumes-949-728/csi-attacher-role-cfg Oct 5 12:12:08.351: INFO: creating *v1.ServiceAccount: csi-mock-volumes-949-728/csi-provisioner Oct 5 12:12:08.354: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-949 Oct 5 12:12:08.354: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-949 Oct 5 12:12:08.358: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-949 Oct 5 12:12:08.362: INFO: creating *v1.Role: csi-mock-volumes-949-728/external-provisioner-cfg-csi-mock-volumes-949 Oct 5 12:12:08.365: INFO: creating *v1.RoleBinding: csi-mock-volumes-949-728/csi-provisioner-role-cfg Oct 5 12:12:08.368: INFO: creating *v1.ServiceAccount: csi-mock-volumes-949-728/csi-resizer Oct 5 12:12:08.372: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-949 Oct 5 12:12:08.372: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-949 Oct 5 12:12:08.375: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-949 Oct 5 12:12:08.379: INFO: creating *v1.Role: csi-mock-volumes-949-728/external-resizer-cfg-csi-mock-volumes-949 Oct 5 12:12:08.382: INFO: creating *v1.RoleBinding: csi-mock-volumes-949-728/csi-resizer-role-cfg Oct 5 12:12:08.385: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-949-728/csi-snapshotter Oct 5 12:12:08.388: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-949 Oct 5 12:12:08.388: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-949 Oct 5 12:12:08.390: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-949 Oct 5 12:12:08.393: INFO: creating *v1.Role: csi-mock-volumes-949-728/external-snapshotter-leaderelection-csi-mock-volumes-949 Oct 5 12:12:08.397: INFO: creating *v1.RoleBinding: csi-mock-volumes-949-728/external-snapshotter-leaderelection Oct 5 12:12:08.400: INFO: creating *v1.ServiceAccount: csi-mock-volumes-949-728/csi-mock Oct 5 12:12:08.403: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-949 Oct 5 12:12:08.406: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-949 Oct 5 12:12:08.410: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-949 Oct 5 12:12:08.413: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-949 Oct 5 12:12:08.416: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-949 Oct 5 12:12:08.420: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-949 Oct 5 12:12:08.423: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-949 Oct 5 12:12:08.426: INFO: creating *v1.StatefulSet: csi-mock-volumes-949-728/csi-mockplugin Oct 5 12:12:08.432: INFO: creating *v1.StatefulSet: csi-mock-volumes-949-728/csi-mockplugin-attacher Oct 5 12:12:08.437: INFO: creating *v1.StatefulSet: csi-mock-volumes-949-728/csi-mockplugin-resizer Oct 5 12:12:08.441: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-949 to register on node v122-worker2 STEP: Creating pod Oct 5 12:12:13.459: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:12:13.465: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fjdz4] to have phase Bound Oct 5 12:12:13.468: INFO: PersistentVolumeClaim pvc-fjdz4 found but phase is Pending instead of Bound. 
Oct 5 12:12:15.473: INFO: PersistentVolumeClaim pvc-fjdz4 found and phase=Bound (2.007734713s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-cs7pv Oct 5 12:12:23.509: INFO: Deleting pod "pvc-volume-tester-cs7pv" in namespace "csi-mock-volumes-949" Oct 5 12:12:23.513: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cs7pv" to be fully deleted STEP: Deleting claim pvc-fjdz4 Oct 5 12:12:25.527: INFO: Waiting up to 2m0s for PersistentVolume pvc-9a9a502a-8ad2-49ee-a64b-7920258e4b25 to get deleted Oct 5 12:12:25.530: INFO: PersistentVolume pvc-9a9a502a-8ad2-49ee-a64b-7920258e4b25 found and phase=Bound (2.754178ms) Oct 5 12:12:27.534: INFO: PersistentVolume pvc-9a9a502a-8ad2-49ee-a64b-7920258e4b25 found and phase=Released (2.006477321s) Oct 5 12:12:29.538: INFO: PersistentVolume pvc-9a9a502a-8ad2-49ee-a64b-7920258e4b25 found and phase=Released (4.010240047s) Oct 5 12:12:31.542: INFO: PersistentVolume pvc-9a9a502a-8ad2-49ee-a64b-7920258e4b25 found and phase=Released (6.014326061s) Oct 5 12:12:33.546: INFO: PersistentVolume pvc-9a9a502a-8ad2-49ee-a64b-7920258e4b25 was removed STEP: Deleting storageclass csi-mock-volumes-949-scwc6wl STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-949 STEP: Waiting for namespaces [csi-mock-volumes-949] to vanish STEP: uninstalling csi mock driver Oct 5 12:12:39.561: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-949-728/csi-attacher Oct 5 12:12:39.566: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-949 Oct 5 12:12:39.570: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-949 Oct 5 12:12:39.576: INFO: deleting *v1.Role: csi-mock-volumes-949-728/external-attacher-cfg-csi-mock-volumes-949 Oct 5 12:12:39.581: INFO: deleting *v1.RoleBinding: csi-mock-volumes-949-728/csi-attacher-role-cfg Oct 5 12:12:39.585: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-949-728/csi-provisioner Oct 5 12:12:39.589: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-949 Oct 5 12:12:39.594: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-949 Oct 5 12:12:39.597: INFO: deleting *v1.Role: csi-mock-volumes-949-728/external-provisioner-cfg-csi-mock-volumes-949 Oct 5 12:12:39.602: INFO: deleting *v1.RoleBinding: csi-mock-volumes-949-728/csi-provisioner-role-cfg Oct 5 12:12:39.606: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-949-728/csi-resizer Oct 5 12:12:39.610: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-949 Oct 5 12:12:39.614: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-949 Oct 5 12:12:39.619: INFO: deleting *v1.Role: csi-mock-volumes-949-728/external-resizer-cfg-csi-mock-volumes-949 Oct 5 12:12:39.623: INFO: deleting *v1.RoleBinding: csi-mock-volumes-949-728/csi-resizer-role-cfg Oct 5 12:12:39.627: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-949-728/csi-snapshotter Oct 5 12:12:39.631: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-949 Oct 5 12:12:39.635: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-949 Oct 5 12:12:39.640: INFO: deleting *v1.Role: csi-mock-volumes-949-728/external-snapshotter-leaderelection-csi-mock-volumes-949 Oct 5 12:12:39.644: INFO: deleting *v1.RoleBinding: csi-mock-volumes-949-728/external-snapshotter-leaderelection Oct 5 12:12:39.649: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-949-728/csi-mock Oct 5 12:12:39.653: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-949 Oct 5 12:12:39.657: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-949 Oct 5 12:12:39.661: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-949 Oct 5 12:12:39.665: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-949 Oct 5 12:12:39.669: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-949 Oct 5 12:12:39.673: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-949 Oct 5 12:12:39.677: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-949 Oct 5 12:12:39.681: INFO: deleting *v1.StatefulSet: csi-mock-volumes-949-728/csi-mockplugin Oct 5 12:12:39.686: INFO: deleting *v1.StatefulSet: csi-mock-volumes-949-728/csi-mockplugin-attacher Oct 5 12:12:39.690: INFO: deleting *v1.StatefulSet: csi-mock-volumes-949-728/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-949-728 STEP: Waiting for namespaces [csi-mock-volumes-949-728] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:45.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:37.472 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562 should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":26,"skipped":960,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:45.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:12:45.803: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:45.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1780" for this suite. 
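The CSI expansion spec that just passed exercises controller-side resize only (nodeExpansion=off): the claim's requested size is patched upward, the external-resizer grows the volume, and the pod keeps running. A minimal hand-run sketch of the same flow is below; the PVC and StorageClass names and the sizes are illustrative, and the StorageClass must have allowVolumeExpansion: true for the patch to be accepted.

  # Sketch: grow a PVC in place (illustrative names; requires an expandable StorageClass).
  kubectl get sc example-expandable-sc -o jsonpath='{.allowVolumeExpansion}'   # expect: true

  # Request a larger size; only spec.resources.requests.storage may grow, never shrink.
  kubectl patch pvc example-pvc --type=merge \
    -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

  # Watch the resize finish: status.capacity.storage catches up with the request.
  kubectl get pvc example-pvc -o jsonpath='{.status.capacity.storage}{"\n"}'
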
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:335 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:36.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-0940be65-7b43-40dd-98ca-626f3a36a2d2" Oct 5 12:12:42.095: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0940be65-7b43-40dd-98ca-626f3a36a2d2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0940be65-7b43-40dd-98ca-626f3a36a2d2" "/tmp/local-volume-test-0940be65-7b43-40dd-98ca-626f3a36a2d2"] Namespace:persistent-local-volumes-test-140 PodName:hostexec-v122-worker-m5td2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:42.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:12:42.254: INFO: Creating a PV followed by a PVC Oct 5 12:12:42.263: INFO: Waiting for PV local-pv9t2gz to bind to PVC pvc-wl99r Oct 5 12:12:42.263: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-wl99r] to have phase Bound Oct 5 12:12:42.265: INFO: PersistentVolumeClaim pvc-wl99r found but phase is Pending instead of Bound. Oct 5 12:12:44.269: INFO: PersistentVolumeClaim pvc-wl99r found but phase is Pending instead of Bound. Oct 5 12:12:46.273: INFO: PersistentVolumeClaim pvc-wl99r found but phase is Pending instead of Bound. Oct 5 12:12:48.277: INFO: PersistentVolumeClaim pvc-wl99r found but phase is Pending instead of Bound. 
Oct 5 12:12:50.282: INFO: PersistentVolumeClaim pvc-wl99r found and phase=Bound (8.019176534s) Oct 5 12:12:50.282: INFO: Waiting up to 3m0s for PersistentVolume local-pv9t2gz to have phase Bound Oct 5 12:12:50.285: INFO: PersistentVolume local-pv9t2gz found and phase=Bound (2.824843ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Oct 5 12:12:50.289: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:12:50.290: INFO: Deleting PersistentVolumeClaim "pvc-wl99r" Oct 5 12:12:50.295: INFO: Deleting PersistentVolume "local-pv9t2gz" STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-0940be65-7b43-40dd-98ca-626f3a36a2d2" Oct 5 12:12:50.300: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0940be65-7b43-40dd-98ca-626f3a36a2d2"] Namespace:persistent-local-volumes-test-140 PodName:hostexec-v122-worker-m5td2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:50.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:12:50.398: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0940be65-7b43-40dd-98ca-626f3a36a2d2] Namespace:persistent-local-volumes-test-140 PodName:hostexec-v122-worker-m5td2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:50.398: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:50.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-140" for this suite. 
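The tmpfs volume type used above is prepared by exec'ing into a privileged hostexec pod and entering the node's mount namespace (the nsenter commands in the trace). A rough hand-run equivalent of that preparation and of the [AfterEach] teardown, with an illustrative path:

  # Run on the node (the suite does it via: nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ...)
  DIR=/tmp/local-volume-test-example          # illustrative path
  mkdir -p "$DIR"
  mount -t tmpfs -o size=10m tmpfs "$DIR"     # the suite names the device tmpfs-<dir>; any device name works

  # ... the directory is then exposed as a local PersistentVolume ...

  # Teardown, mirroring the cleanup steps above:
  umount "$DIR"
  rm -r "$DIR"
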
S [SKIPPING] [14.456 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:45.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:12:47.707: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-407b2fdd-11c3-4138-ab0f-9222f78de4f4-backend && ln -s /tmp/local-volume-test-407b2fdd-11c3-4138-ab0f-9222f78de4f4-backend /tmp/local-volume-test-407b2fdd-11c3-4138-ab0f-9222f78de4f4] Namespace:persistent-local-volumes-test-9260 PodName:hostexec-v122-worker-hlpqm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:12:47.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:12:47.864: INFO: Creating a PV followed by a PVC Oct 5 12:12:47.872: INFO: Waiting for PV local-pvwktfc to bind to PVC pvc-54jx4 Oct 5 12:12:47.872: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-54jx4] to have phase Bound Oct 5 12:12:47.875: INFO: PersistentVolumeClaim pvc-54jx4 found but phase is Pending instead of Bound. 
Oct 5 12:12:49.879: INFO: PersistentVolumeClaim pvc-54jx4 found and phase=Bound (2.006506379s) Oct 5 12:12:49.879: INFO: Waiting up to 3m0s for PersistentVolume local-pvwktfc to have phase Bound Oct 5 12:12:49.882: INFO: PersistentVolume local-pvwktfc found and phase=Bound (2.812408ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:12:51.906: INFO: pod "pod-64f22b18-942c-4b9e-9b45-85b8c13c24f4" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:12:51.906: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9260 PodName:pod-64f22b18-942c-4b9e-9b45-85b8c13c24f4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:51.906: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:52.008: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Oct 5 12:12:52.008: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9260 PodName:pod-64f22b18-942c-4b9e-9b45-85b8c13c24f4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:52.008: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:52.140: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Oct 5 12:12:52.140: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-407b2fdd-11c3-4138-ab0f-9222f78de4f4 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9260 PodName:pod-64f22b18-942c-4b9e-9b45-85b8c13c24f4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:12:52.140: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:52.268: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-407b2fdd-11c3-4138-ab0f-9222f78de4f4 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-64f22b18-942c-4b9e-9b45-85b8c13c24f4 in namespace persistent-local-volumes-test-9260 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:12:52.273: INFO: Deleting PersistentVolumeClaim "pvc-54jx4" Oct 5 12:12:52.278: INFO: Deleting PersistentVolume "local-pvwktfc" STEP: Removing the test directory Oct 5 12:12:52.282: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-407b2fdd-11c3-4138-ab0f-9222f78de4f4 && rm -r /tmp/local-volume-test-407b2fdd-11c3-4138-ab0f-9222f78de4f4-backend] Namespace:persistent-local-volumes-test-9260 PodName:hostexec-v122-worker-hlpqm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Oct 5 12:12:52.282: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:52.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9260" for this suite. • [SLOW TEST:6.791 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":16,"skipped":326,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:52.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Oct 5 12:12:52.504: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:52.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-1344" for this suite. 
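Every local-volume spec in this trace, including the dir-link variant above, repeats the same "Creating a PV followed by a PVC" step: a PersistentVolume of type local pinned to one node via nodeAffinity, plus a claim that binds to it through a shared storageClassName. A minimal sketch of that pair; the names, node, path and sizes are illustrative, not the suite's own objects.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: example-local-pv            # illustrative
  spec:
    capacity:
      storage: 2Gi
    accessModes: ["ReadWriteOnce"]
    persistentVolumeReclaimPolicy: Retain
    storageClassName: local-storage
    local:
      path: /tmp/local-volume-test-example
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values: ["v122-worker"]
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: example-local-pvc           # illustrative
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: local-storage
    resources:
      requests:
        storage: 2Gi
  EOF
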
S [SKIPPING] in Spec Setup (BeforeEach) [0.050 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:86 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:50.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 STEP: Creating configMap with name projected-configmap-test-volume-1e9342bf-8a17-4804-9110-78b5c9461daf STEP: Creating a pod to test consume configMaps Oct 5 12:12:50.549: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-096281cb-f80e-4d6c-a518-48536516060d" in namespace "projected-6257" to be "Succeeded or Failed" Oct 5 12:12:50.552: INFO: Pod "pod-projected-configmaps-096281cb-f80e-4d6c-a518-48536516060d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178113ms Oct 5 12:12:52.555: INFO: Pod "pod-projected-configmaps-096281cb-f80e-4d6c-a518-48536516060d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005695742s Oct 5 12:12:54.559: INFO: Pod "pod-projected-configmaps-096281cb-f80e-4d6c-a518-48536516060d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009648283s Oct 5 12:12:56.563: INFO: Pod "pod-projected-configmaps-096281cb-f80e-4d6c-a518-48536516060d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013544771s STEP: Saw pod success Oct 5 12:12:56.563: INFO: Pod "pod-projected-configmaps-096281cb-f80e-4d6c-a518-48536516060d" satisfied condition "Succeeded or Failed" Oct 5 12:12:56.566: INFO: Trying to get logs from node v122-worker2 pod pod-projected-configmaps-096281cb-f80e-4d6c-a518-48536516060d container agnhost-container: STEP: delete the pod Oct 5 12:12:56.593: INFO: Waiting for pod pod-projected-configmaps-096281cb-f80e-4d6c-a518-48536516060d to disappear Oct 5 12:12:56.596: INFO: Pod pod-projected-configmaps-096281cb-f80e-4d6c-a518-48536516060d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:56.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6257" for this suite. 
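The projected-configMap spec above mounts a ConfigMap through a projected volume into a pod running as a non-root user with an fsGroup, then reads the file back. A hand-written pod along the same lines might look like this; the ConfigMap name, key, user/group IDs and image are illustrative.

  kubectl create configmap example-config --from-literal=data-1=value-1    # illustrative

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-projected-cm-pod
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000
      fsGroup: 2000
    containers:
    - name: reader
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        defaultMode: 0440
        sources:
        - configMap:
            name: example-config
  EOF
  kubectl logs example-projected-cm-pod   # files should be group-owned by the fsGroup and readable
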
• [SLOW TEST:6.090 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":15,"skipped":449,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:52.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Oct 5 12:12:52.562: INFO: The status of Pod test-hostpath-type-pjgmg is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:12:54.566: INFO: The status of Pod test-hostpath-type-pjgmg is Running (Ready = true) STEP: running on node v122-worker2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:12:58.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-4564" for this suite. 
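The HostPathType check above works by declaring a hostPath volume with type: Directory for a path that does not exist on the node; kubelet then refuses to mount it, and the failure surfaces as an event rather than a running pod. A sketch of reproducing that by hand (pod name, node and path are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-hostpath-dir-pod
  spec:
    restartPolicy: Never
    nodeName: v122-worker2              # pin to a node, as the suite does
    containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: host
        mountPath: /mnt/test
    volumes:
    - name: host
      hostPath:
        path: /mnt/test/does-not-exist-dir
        type: Directory                 # must already exist as a directory on the node
  EOF

  # The pod stays in ContainerCreating; the HostPathType failure shows up in its events.
  kubectl describe pod example-hostpath-dir-pod | grep -i hostpath
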
• [SLOW TEST:6.113 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory","total":-1,"completed":17,"skipped":341,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:58.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 STEP: Creating a pod to test emptydir volume type on node default medium Oct 5 12:12:58.689: INFO: Waiting up to 5m0s for pod "pod-7fa57533-8c7c-42ec-88b3-56bd8e297ad8" in namespace "emptydir-2293" to be "Succeeded or Failed" Oct 5 12:12:58.692: INFO: Pod "pod-7fa57533-8c7c-42ec-88b3-56bd8e297ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.269362ms Oct 5 12:13:00.700: INFO: Pod "pod-7fa57533-8c7c-42ec-88b3-56bd8e297ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011214379s Oct 5 12:13:02.704: INFO: Pod "pod-7fa57533-8c7c-42ec-88b3-56bd8e297ad8": Phase="Running", Reason="", readiness=true. Elapsed: 4.01585533s Oct 5 12:13:04.708: INFO: Pod "pod-7fa57533-8c7c-42ec-88b3-56bd8e297ad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019522756s STEP: Saw pod success Oct 5 12:13:04.708: INFO: Pod "pod-7fa57533-8c7c-42ec-88b3-56bd8e297ad8" satisfied condition "Succeeded or Failed" Oct 5 12:13:04.711: INFO: Trying to get logs from node v122-worker pod pod-7fa57533-8c7c-42ec-88b3-56bd8e297ad8 container test-container: STEP: delete the pod Oct 5 12:13:04.740: INFO: Waiting for pod pod-7fa57533-8c7c-42ec-88b3-56bd8e297ad8 to disappear Oct 5 12:13:04.743: INFO: Pod pod-7fa57533-8c7c-42ec-88b3-56bd8e297ad8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:04.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2293" for this suite. 
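The emptyDir spec above verifies that a default-medium emptyDir mount gets the expected mode and group ownership when fsGroup is set on the pod. Roughly, the same thing can be observed with a throwaway pod like the following (names and IDs are illustrative); the volume root should be group-owned by the fsGroup.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-emptydir-fsgroup-pod
  spec:
    restartPolicy: Never
    securityContext:
      fsGroup: 123
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls -lnd /mnt/ed && id"]
      volumeMounts:
      - name: ed
        mountPath: /mnt/ed
    volumes:
    - name: ed
      emptyDir: {}                      # default medium (node disk), not Memory
  EOF
  kubectl logs example-emptydir-fsgroup-pod   # group of /mnt/ed should be 123
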
• [SLOW TEST:6.108 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":18,"skipped":345,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:56.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Oct 5 12:12:56.759: INFO: The status of Pod test-hostpath-type-v6l9b is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:12:58.764: INFO: The status of Pod test-hostpath-type-v6l9b is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:13:00.764: INFO: The status of Pod test-hostpath-type-v6l9b is Running (Ready = true) STEP: running on node v122-worker STEP: Create a block device for further testing Oct 5 12:13:00.767: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-9812 PodName:test-hostpath-type-v6l9b ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:13:00.767: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:04.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-9812" for this suite. 
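The BlockDevice spec above first creates a device node inside a privileged helper pod (the mknod ... b 89 1 command shown in the trace) and then mounts it through a hostPath volume whose type asserts it really is a block device. A hand-run sketch with illustrative names; major/minor 89:1 are simply the numbers the suite uses.

  # Inside a privileged pod (or directly on the node) that can see /mnt/test:
  mknod /mnt/test/ablkdev b 89 1

  # Then mount it with an explicit HostPathType:
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-hostpath-blockdev-pod
  spec:
    restartPolicy: Never
    nodeName: v122-worker               # the node where the device node was created
    containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: dev
        mountPath: /mnt/blk
    volumes:
    - name: dev
      hostPath:
        path: /mnt/test/ablkdev
        type: BlockDevice               # mount only succeeds if the path is a block device
  EOF
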
• [SLOW TEST:8.222 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev","total":-1,"completed":16,"skipped":506,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:04.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Oct 5 12:13:04.821: INFO: The status of Pod test-hostpath-type-xlnxq is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:13:06.825: INFO: The status of Pod test-hostpath-type-xlnxq is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:13:08.825: INFO: The status of Pod test-hostpath-type-xlnxq is Running (Ready = true) STEP: running on node v122-worker STEP: Create a character device for further testing Oct 5 12:13:08.828: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-7247 PodName:test-hostpath-type-xlnxq ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:13:08.828: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:10.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-7247" for this suite. 
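The mismatch case above is the inverse: the helper creates a character device (mknod ... c 89 1), but the pod asks for type: BlockDevice, so kubelet rejects the mount and the spec only waits for the corresponding error event. One way to watch for that event by hand; the pod name is illustrative, and the exact event wording is an assumption.

  # After creating a pod whose hostPath volume uses type: BlockDevice but points at /mnt/test/achardev:
  kubectl get events \
    --field-selector involvedObject.name=example-hostpath-chardev-pod \
    --sort-by=.lastTimestamp
  # Expect a FailedMount-style event reporting that the hostPath type check failed.
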
• [SLOW TEST:6.220 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev","total":-1,"completed":19,"skipped":355,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:05.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Oct 5 12:13:05.059: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Oct 5 12:13:05.065: INFO: Default storage class: "standard" Oct 5 12:13:05.065: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Creating a Pod that becomes Running and therefore is actively using the PVC STEP: Waiting for PVC to become Bound Oct 5 12:13:17.086: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-protection42m25] to have phase Bound Oct 5 12:13:17.090: INFO: PersistentVolumeClaim pvc-protection42m25 found and phase=Bound (3.326684ms) STEP: Checking that PVC Protection finalizer is set [It] Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 STEP: Deleting the pod using the PVC Oct 5 12:13:17.093: INFO: Deleting pod "pvc-tester-hzvsr" in namespace "pvc-protection-3774" Oct 5 12:13:17.098: INFO: Wait up to 5m0s for pod "pvc-tester-hzvsr" to be fully deleted STEP: Deleting the PVC Oct 5 12:13:19.111: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-protection42m25 to be removed Oct 5 12:13:21.118: INFO: Claim "pvc-protection42m25" in namespace "pvc-protection-3774" doesn't exist in the system [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:21.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-3774" for this suite. 
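The PVC Protection spec above relies on the kubernetes.io/pvc-protection finalizer: while a running pod uses the claim, deletion only marks it Terminating; once no pod uses it, deletion completes immediately. The same behaviour can be checked by hand (the claim name is illustrative; in this run the claim comes from the cluster's default StorageClass, "standard").

  # The protection finalizer is added automatically when the claim is created:
  kubectl get pvc example-claim -o jsonpath='{.metadata.finalizers}{"\n"}'
  # expect the kubernetes.io/pvc-protection finalizer to be listed

  # Make sure no pod mounts the claim, then delete it; it should disappear right away:
  kubectl delete pvc example-claim --wait=true
  kubectl get pvc example-claim    # expect: NotFound
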
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 • [SLOW TEST:16.104 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PVC that is not in active use by a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 ------------------------------ {"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":17,"skipped":549,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:21.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Oct 5 12:13:21.262: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:21.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-5374" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv3 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:102 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:11.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Oct 5 12:13:11.113: INFO: The status of Pod test-hostpath-type-fsjsm is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:13:13.116: INFO: The status of Pod test-hostpath-type-fsjsm is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:13:15.117: INFO: The status of Pod test-hostpath-type-fsjsm is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:13:17.117: INFO: The status of Pod test-hostpath-type-fsjsm is Running (Ready = true) STEP: running on node v122-worker STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:23.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-2652" for this suite. 
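The HostPathType case above mounts an existing directory with the type left unset (HostPathUnset), meaning kubelet performs no type check at all. In YAML that simply means omitting or emptying the hostPath type field; a sketch with illustrative names and paths:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-hostpath-unset-pod
  spec:
    restartPolicy: Never
    nodeName: v122-worker
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls /mnt/adir && sleep 3600"]
      volumeMounts:
      - name: host
        mountPath: /mnt/adir
    volumes:
    - name: host
      hostPath:
        path: /mnt/test/adir
        type: ""                        # unset: no HostPathType check before mounting
  EOF
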
• [SLOW TEST:12.103 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset","total":-1,"completed":20,"skipped":391,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:45.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeStage after NodeUnstage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:962 STEP: Building a driver namespace object, basename csi-mock-volumes-3989 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:12:45.916: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-attacher Oct 5 12:12:45.920: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3989 Oct 5 12:12:45.920: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3989 Oct 5 12:12:45.924: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3989 Oct 5 12:12:45.928: INFO: creating *v1.Role: csi-mock-volumes-3989-4249/external-attacher-cfg-csi-mock-volumes-3989 Oct 5 12:12:45.932: INFO: creating *v1.RoleBinding: csi-mock-volumes-3989-4249/csi-attacher-role-cfg Oct 5 12:12:45.936: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-provisioner Oct 5 12:12:45.939: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3989 Oct 5 12:12:45.940: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3989 Oct 5 12:12:45.944: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3989 Oct 5 12:12:45.948: INFO: creating *v1.Role: csi-mock-volumes-3989-4249/external-provisioner-cfg-csi-mock-volumes-3989 Oct 5 12:12:45.952: INFO: creating *v1.RoleBinding: csi-mock-volumes-3989-4249/csi-provisioner-role-cfg Oct 5 12:12:45.956: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-resizer Oct 5 12:12:45.960: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3989 Oct 5 12:12:45.960: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3989 Oct 5 12:12:45.964: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3989 Oct 5 12:12:45.968: INFO: creating *v1.Role: csi-mock-volumes-3989-4249/external-resizer-cfg-csi-mock-volumes-3989 Oct 5 12:12:45.971: INFO: creating *v1.RoleBinding: csi-mock-volumes-3989-4249/csi-resizer-role-cfg Oct 5 12:12:45.975: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-snapshotter Oct 5 12:12:45.979: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-3989 Oct 5 12:12:45.979: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3989 Oct 5 12:12:45.983: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3989 Oct 5 12:12:45.986: INFO: creating *v1.Role: csi-mock-volumes-3989-4249/external-snapshotter-leaderelection-csi-mock-volumes-3989 Oct 5 12:12:45.990: INFO: creating *v1.RoleBinding: csi-mock-volumes-3989-4249/external-snapshotter-leaderelection Oct 5 12:12:45.994: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-mock Oct 5 12:12:45.997: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3989 Oct 5 12:12:46.002: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3989 Oct 5 12:12:46.006: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3989 Oct 5 12:12:46.010: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3989 Oct 5 12:12:46.013: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3989 Oct 5 12:12:46.017: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3989 Oct 5 12:12:46.021: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3989 Oct 5 12:12:46.025: INFO: creating *v1.StatefulSet: csi-mock-volumes-3989-4249/csi-mockplugin Oct 5 12:12:46.031: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3989 Oct 5 12:12:46.035: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3989" Oct 5 12:12:46.038: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3989 to register on node v122-worker STEP: Creating pod Oct 5 12:12:51.052: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:12:51.059: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fq62f] to have phase Bound Oct 5 12:12:51.062: INFO: PersistentVolumeClaim pvc-fq62f found but phase is Pending instead of Bound. 
Oct 5 12:12:53.067: INFO: PersistentVolumeClaim pvc-fq62f found and phase=Bound (2.007604282s) Oct 5 12:12:53.078: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fq62f] to have phase Bound Oct 5 12:12:53.081: INFO: PersistentVolumeClaim pvc-fq62f found and phase=Bound (3.339892ms) Oct 5 12:12:55.088: INFO: Deleting pod "pvc-volume-tester-zcpff" in namespace "csi-mock-volumes-3989" Oct 5 12:12:55.093: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zcpff" to be fully deleted Oct 5 12:13:05.111: INFO: Deleting pod "pvc-volume-tester-6jcd8" in namespace "csi-mock-volumes-3989" Oct 5 12:13:05.116: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6jcd8" to be fully deleted STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-zcpff Oct 5 12:13:10.140: INFO: Deleting pod "pvc-volume-tester-zcpff" in namespace "csi-mock-volumes-3989" STEP: Deleting pod pvc-volume-tester-6jcd8 Oct 5 12:13:10.143: INFO: Deleting pod "pvc-volume-tester-6jcd8" in namespace "csi-mock-volumes-3989" STEP: Deleting claim pvc-fq62f Oct 5 12:13:10.153: INFO: Waiting up to 2m0s for PersistentVolume pvc-cd3d9835-51e2-4786-b90a-d2a72b6902db to get deleted Oct 5 12:13:10.156: INFO: PersistentVolume pvc-cd3d9835-51e2-4786-b90a-d2a72b6902db found and phase=Bound (2.837283ms) Oct 5 12:13:12.160: INFO: PersistentVolume pvc-cd3d9835-51e2-4786-b90a-d2a72b6902db was removed STEP: Deleting storageclass csi-mock-volumes-3989-scbstzr STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3989 STEP: Waiting for namespaces [csi-mock-volumes-3989] to vanish STEP: uninstalling csi mock driver Oct 5 12:13:18.175: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-attacher Oct 5 12:13:18.180: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3989 Oct 5 12:13:18.185: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3989 Oct 5 12:13:18.190: INFO: deleting *v1.Role: csi-mock-volumes-3989-4249/external-attacher-cfg-csi-mock-volumes-3989 Oct 5 12:13:18.195: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3989-4249/csi-attacher-role-cfg Oct 5 12:13:18.202: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-provisioner Oct 5 12:13:18.209: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3989 Oct 5 12:13:18.214: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3989 Oct 5 12:13:18.218: INFO: deleting *v1.Role: csi-mock-volumes-3989-4249/external-provisioner-cfg-csi-mock-volumes-3989 Oct 5 12:13:18.223: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3989-4249/csi-provisioner-role-cfg Oct 5 12:13:18.228: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-resizer Oct 5 12:13:18.232: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3989 Oct 5 12:13:18.237: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3989 Oct 5 12:13:18.241: INFO: deleting *v1.Role: csi-mock-volumes-3989-4249/external-resizer-cfg-csi-mock-volumes-3989 Oct 5 12:13:18.246: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3989-4249/csi-resizer-role-cfg Oct 5 12:13:18.250: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-snapshotter Oct 5 12:13:18.255: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3989 Oct 5 12:13:18.259: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3989 Oct 5 12:13:18.263: INFO: deleting *v1.Role: 
csi-mock-volumes-3989-4249/external-snapshotter-leaderelection-csi-mock-volumes-3989 Oct 5 12:13:18.268: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3989-4249/external-snapshotter-leaderelection Oct 5 12:13:18.273: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3989-4249/csi-mock Oct 5 12:13:18.277: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3989 Oct 5 12:13:18.283: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3989 Oct 5 12:13:18.287: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3989 Oct 5 12:13:18.291: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3989 Oct 5 12:13:18.295: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3989 Oct 5 12:13:18.300: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3989 Oct 5 12:13:18.304: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3989 Oct 5 12:13:18.309: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3989-4249/csi-mockplugin Oct 5 12:13:18.314: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3989 STEP: deleting the driver namespace: csi-mock-volumes-3989-4249 STEP: Waiting for namespaces [csi-mock-volumes-3989-4249] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:24.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:38.502 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeUnstage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:901 should call NodeStage after NodeUnstage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:962 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success","total":-1,"completed":27,"skipped":1005,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:24.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:13:24.533: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:24.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3737" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.046 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:457 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:568 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:21.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Oct 5 12:13:21.343: INFO: The status of Pod test-hostpath-type-5h94l is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:13:23.347: INFO: The status of Pod test-hostpath-type-5h94l is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:13:25.348: INFO: The status of Pod test-hostpath-type-5h94l is Running (Ready = true) STEP: running on node v122-worker STEP: Create a character device for further testing Oct 5 12:13:25.351: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-205 PodName:test-hostpath-type-5h94l ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:13:25.351: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:27.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-205" for this suite. 
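------------------------------
A sketch of the volume shape behind the HostPathType Character Device spec above, with illustrative names: the node-side file is a character device (the log creates it with mknod /mnt/test/achardev c 89 1), but the volume declares HostPathSocket, so the kubelet's type check fails and the pod surfaces an error event instead of mounting.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// mismatchedCharDevVolume declares a hostPath volume whose Type does not match
// what is actually on the node: /mnt/test/achardev is a character device, but
// the volume claims it is a UNIX socket, so kubelet refuses to mount it.
func mismatchedCharDevVolume() corev1.Volume {
	wrongType := corev1.HostPathSocket // the matching value would be corev1.HostPathCharDev
	return corev1.Volume{
		Name: "host-chardev",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/mnt/test/achardev",
				Type: &wrongType,
			},
		},
	}
}

func main() {
	v := mismatchedCharDevVolume()
	fmt.Println(v.Name, *v.VolumeSource.HostPath.Type)
}
------------------------------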
• [SLOW TEST:6.476 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket","total":-1,"completed":18,"skipped":615,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:24.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-84352bdd-9c49-48e5-b355-b956326436fe" Oct 5 12:13:28.633: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-84352bdd-9c49-48e5-b355-b956326436fe && dd if=/dev/zero of=/tmp/local-volume-test-84352bdd-9c49-48e5-b355-b956326436fe/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-84352bdd-9c49-48e5-b355-b956326436fe/file] Namespace:persistent-local-volumes-test-1547 PodName:hostexec-v122-worker-hfqv7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:28.634: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:28.809: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-84352bdd-9c49-48e5-b355-b956326436fe/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1547 PodName:hostexec-v122-worker-hfqv7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:28.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:13:28.926: INFO: Creating a PV followed by a PVC Oct 5 12:13:28.934: INFO: Waiting for PV local-pv2dc6x to bind to PVC pvc-54l8w Oct 5 12:13:28.934: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-54l8w] to have phase Bound Oct 5 12:13:28.937: INFO: PersistentVolumeClaim pvc-54l8w found but phase is Pending instead of Bound. Oct 5 12:13:30.942: INFO: PersistentVolumeClaim pvc-54l8w found but phase is Pending instead of Bound. Oct 5 12:13:32.947: INFO: PersistentVolumeClaim pvc-54l8w found but phase is Pending instead of Bound. 
Oct 5 12:13:34.951: INFO: PersistentVolumeClaim pvc-54l8w found and phase=Bound (6.016478639s) Oct 5 12:13:34.951: INFO: Waiting up to 3m0s for PersistentVolume local-pv2dc6x to have phase Bound Oct 5 12:13:34.954: INFO: PersistentVolume local-pv2dc6x found and phase=Bound (3.4461ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Oct 5 12:13:34.960: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:13:34.961: INFO: Deleting PersistentVolumeClaim "pvc-54l8w" Oct 5 12:13:34.967: INFO: Deleting PersistentVolume "local-pv2dc6x" Oct 5 12:13:34.972: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-84352bdd-9c49-48e5-b355-b956326436fe/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1547 PodName:hostexec-v122-worker-hfqv7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:34.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop8" on node "v122-worker" at path /tmp/local-volume-test-84352bdd-9c49-48e5-b355-b956326436fe/file Oct 5 12:13:35.120: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-1547 PodName:hostexec-v122-worker-hfqv7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:35.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-84352bdd-9c49-48e5-b355-b956326436fe Oct 5 12:13:35.306: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-84352bdd-9c49-48e5-b355-b956326436fe] Namespace:persistent-local-volumes-test-1547 PodName:hostexec-v122-worker-hfqv7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:35.306: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:35.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1547" for this suite. 
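------------------------------
The local PVs built in the PersistentVolumes-local specs above are backed by a path that exists on a single node (here a loop device under /tmp on v122-worker), so they must carry required node affinity. A sketch of such a PV follows; the PV name, path, capacity and storage class are illustrative, while the node name is taken from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func localPV() *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-demo"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage",
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// The path points at storage that only exists on one node.
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-demo"},
			},
			// Local PVs require node affinity so pods land on the node that owns the path.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"v122-worker"},
						}},
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(localPV().Name)
}
------------------------------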
S [SKIPPING] [10.897 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:23.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:13:25.234: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-9b46ee23-1c08-4d65-b06a-d3c499e72d86-backend && ln -s /tmp/local-volume-test-9b46ee23-1c08-4d65-b06a-d3c499e72d86-backend /tmp/local-volume-test-9b46ee23-1c08-4d65-b06a-d3c499e72d86] Namespace:persistent-local-volumes-test-7651 PodName:hostexec-v122-worker2-bft8j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:25.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:13:25.383: INFO: Creating a PV followed by a PVC Oct 5 12:13:25.392: INFO: Waiting for PV local-pv4xz9q to bind to PVC pvc-2l6nh Oct 5 12:13:25.392: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2l6nh] to have phase Bound Oct 5 12:13:25.395: INFO: PersistentVolumeClaim pvc-2l6nh found but phase is Pending instead of Bound. Oct 5 12:13:27.400: INFO: PersistentVolumeClaim pvc-2l6nh found but phase is Pending instead of Bound. Oct 5 12:13:29.404: INFO: PersistentVolumeClaim pvc-2l6nh found but phase is Pending instead of Bound. Oct 5 12:13:31.409: INFO: PersistentVolumeClaim pvc-2l6nh found but phase is Pending instead of Bound. Oct 5 12:13:33.413: INFO: PersistentVolumeClaim pvc-2l6nh found but phase is Pending instead of Bound. 
Oct 5 12:13:35.418: INFO: PersistentVolumeClaim pvc-2l6nh found and phase=Bound (10.026234058s) Oct 5 12:13:35.418: INFO: Waiting up to 3m0s for PersistentVolume local-pv4xz9q to have phase Bound Oct 5 12:13:35.421: INFO: PersistentVolume local-pv4xz9q found and phase=Bound (3.04404ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:13:37.454: INFO: pod "pod-0ff606f9-22e2-42f8-bdd0-1d6aaaa952dd" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:13:37.454: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7651 PodName:pod-0ff606f9-22e2-42f8-bdd0-1d6aaaa952dd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:13:37.454: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:37.578: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Oct 5 12:13:37.578: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7651 PodName:pod-0ff606f9-22e2-42f8-bdd0-1d6aaaa952dd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:13:37.578: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:37.660: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-0ff606f9-22e2-42f8-bdd0-1d6aaaa952dd in namespace persistent-local-volumes-test-7651 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:13:37.667: INFO: Deleting PersistentVolumeClaim "pvc-2l6nh" Oct 5 12:13:37.671: INFO: Deleting PersistentVolume "local-pv4xz9q" STEP: Removing the test directory Oct 5 12:13:37.676: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9b46ee23-1c08-4d65-b06a-d3c499e72d86 && rm -r /tmp/local-volume-test-9b46ee23-1c08-4d65-b06a-d3c499e72d86-backend] Namespace:persistent-local-volumes-test-7651 PodName:hostexec-v122-worker2-bft8j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:37.676: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:37.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7651" for this suite. 
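------------------------------
The "One pod requesting one prebound PVC" flow in the dir-link spec above pre-creates the PV and then binds a claim to it by name before the pod that writes and reads /mnt/volume1/test-file is created. A sketch of such a claim, assuming an illustrative PV name and storage class; the 1.22 API dumped later in this log still expresses claim resources as v1.ResourceRequirements, which is what is used here.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func preboundPVC() *corev1.PersistentVolumeClaim {
	sc := "local-storage"
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "prebound-claim-demo"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			// VolumeName pins the claim to one specific, already-created PV,
			// so the binder does not pick an arbitrary matching volume.
			VolumeName: "local-pv-demo",
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("2Gi"),
				},
			},
		},
	}
}

func main() {
	fmt.Println(preboundPVC().Name)
}
------------------------------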
• [SLOW TEST:14.669 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":21,"skipped":393,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:37.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Oct 5 12:13:38.022: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:38.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-5643" for this suite. 
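------------------------------
The Pod Disks specs above skip because the gate in pd.go wants at least two nodes; the "-1" in the message suggests the configured node count was never set for this local kind run. A rough client-go sketch of that kind of gate follows; it is not the e2e framework's own helper, just the idea, and the kubeconfig path is the one printed throughout the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// schedulableNodeCount lists nodes and counts those that are Ready and not
// marked unschedulable, roughly what a "requires at least N nodes" gate checks.
func schedulableNodeCount(cs kubernetes.Interface) (int, error) {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	count := 0
	for _, n := range nodes.Items {
		if n.Spec.Unschedulable {
			continue
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				count++
				break
			}
		}
	}
	return count, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	n, err := schedulableNodeCount(cs)
	if err != nil {
		panic(err)
	}
	if n < 2 {
		fmt.Printf("skipping: requires at least 2 schedulable nodes, found %d\n", n)
		return
	}
	fmt.Printf("%d schedulable nodes\n", n)
}
------------------------------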
S [SKIPPING] in Spec Setup (BeforeEach) [0.044 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:38.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Oct 5 12:13:38.090: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:38.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3624" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Oct 5 12:13:38.099: INFO: AfterEach: Cleaning up test resources Oct 5 12:13:38.099: INFO: pvc is nil Oct 5 12:13:38.099: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:08:46.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 STEP: Creating secret with name s-test-opt-create-fa10127b-82cd-4309-a908-8c1a9761eaaf STEP: Creating the pod [AfterEach] [sig-storage] Secrets 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:46.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1702" for this suite. • [SLOW TEST:300.077 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 ------------------------------ {"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":8,"skipped":425,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:46.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Oct 5 12:13:46.539: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:46.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-3365" for this suite. 
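------------------------------
A sketch of the pod shape behind the [sig-storage] Secrets spec above ("Should fail non-optional pod creation due to the key in the secret object does not exist"): the secret exists, but the projected key does not, and because the volume is not optional the kubelet can never populate it, so the pod never becomes ready, which is what the spec waits roughly five minutes to confirm. Secret, key and image names below are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podWithMissingSecretKey() *corev1.Pod {
	optional := false // non-optional: a missing key blocks volume setup instead of being skipped
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-missing-key-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create-demo",
						// The secret exists, but this key does not.
						Items:    []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
						Optional: &optional,
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(podWithMissingSecretKey().Name)
}
------------------------------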
S [SKIPPING] in Spec Setup (BeforeEach) [0.047 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:46.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:13:48.760: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-d86fcee5-23a9-42c4-9bbd-239a52d4cb8c && mount --bind /tmp/local-volume-test-d86fcee5-23a9-42c4-9bbd-239a52d4cb8c /tmp/local-volume-test-d86fcee5-23a9-42c4-9bbd-239a52d4cb8c] Namespace:persistent-local-volumes-test-4801 PodName:hostexec-v122-worker2-bsmsk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:48.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:13:48.894: INFO: Creating a PV followed by a PVC Oct 5 12:13:48.904: INFO: Waiting for PV local-pvb56vb to bind to PVC pvc-m7g5r Oct 5 12:13:48.904: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-m7g5r] to have phase Bound Oct 5 12:13:48.907: INFO: PersistentVolumeClaim pvc-m7g5r found but phase is Pending instead of Bound. 
Oct 5 12:13:50.911: INFO: PersistentVolumeClaim pvc-m7g5r found and phase=Bound (2.007032012s) Oct 5 12:13:50.911: INFO: Waiting up to 3m0s for PersistentVolume local-pvb56vb to have phase Bound Oct 5 12:13:50.914: INFO: PersistentVolume local-pvb56vb found and phase=Bound (3.277568ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Oct 5 12:13:50.920: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:13:50.922: INFO: Deleting PersistentVolumeClaim "pvc-m7g5r" Oct 5 12:13:50.927: INFO: Deleting PersistentVolume "local-pvb56vb" STEP: Removing the test directory Oct 5 12:13:50.932: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-d86fcee5-23a9-42c4-9bbd-239a52d4cb8c && rm -r /tmp/local-volume-test-d86fcee5-23a9-42c4-9bbd-239a52d4cb8c] Namespace:persistent-local-volumes-test-4801 PodName:hostexec-v122-worker2-bsmsk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:50.932: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:51.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4801" for this suite. 
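------------------------------
The "Set fsGroup for local volume" specs above (currently skipped pending #73168) revolve around the pod-level fsGroup field. A sketch of how a pod requests it, with the group ID and claim name below chosen purely for illustration: on volumes that support ownership management, files are made group-owned by that GID and group-writable before the containers start.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podWithFSGroup() *corev1.Pod {
	fsGroup := int64(1234) // arbitrary GID for illustration
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				// Applied to supporting volumes at mount time; files become group fsGroup.
				FSGroup: &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:    "checker",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id && stat -c '%g' /mnt/volume1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "vol",
					MountPath: "/mnt/volume1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "vol",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "prebound-claim-demo",
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(podWithFSGroup().Name)
}
------------------------------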
S [SKIPPING] [4.403 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:38.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b" Oct 5 12:13:40.298: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b && dd if=/dev/zero of=/tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b/file] Namespace:persistent-local-volumes-test-4972 PodName:hostexec-v122-worker-rnrxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:40.298: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:40.473: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4972 PodName:hostexec-v122-worker-rnrxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:40.473: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:40.617: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop9 && mount -t ext4 /dev/loop9 /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b && chmod o+rwx /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b] Namespace:persistent-local-volumes-test-4972 PodName:hostexec-v122-worker-rnrxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:40.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs 
Oct 5 12:13:41.106: INFO: Creating a PV followed by a PVC Oct 5 12:13:41.115: INFO: Waiting for PV local-pvdfm96 to bind to PVC pvc-wfp4l Oct 5 12:13:41.115: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-wfp4l] to have phase Bound Oct 5 12:13:41.117: INFO: PersistentVolumeClaim pvc-wfp4l found but phase is Pending instead of Bound. Oct 5 12:13:43.122: INFO: PersistentVolumeClaim pvc-wfp4l found but phase is Pending instead of Bound. Oct 5 12:13:45.126: INFO: PersistentVolumeClaim pvc-wfp4l found but phase is Pending instead of Bound. Oct 5 12:13:47.130: INFO: PersistentVolumeClaim pvc-wfp4l found but phase is Pending instead of Bound. Oct 5 12:13:49.134: INFO: PersistentVolumeClaim pvc-wfp4l found but phase is Pending instead of Bound. Oct 5 12:13:51.138: INFO: PersistentVolumeClaim pvc-wfp4l found and phase=Bound (10.023741932s) Oct 5 12:13:51.138: INFO: Waiting up to 3m0s for PersistentVolume local-pvdfm96 to have phase Bound Oct 5 12:13:51.141: INFO: PersistentVolume local-pvdfm96 found and phase=Bound (2.942064ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Oct 5 12:13:51.148: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:13:51.149: INFO: Deleting PersistentVolumeClaim "pvc-wfp4l" Oct 5 12:13:51.153: INFO: Deleting PersistentVolume "local-pvdfm96" Oct 5 12:13:51.158: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b] Namespace:persistent-local-volumes-test-4972 PodName:hostexec-v122-worker-rnrxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:51.158: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:51.350: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4972 PodName:hostexec-v122-worker-rnrxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:51.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop9" on node "v122-worker" at path /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b/file Oct 5 12:13:51.495: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop9] Namespace:persistent-local-volumes-test-4972 PodName:hostexec-v122-worker-rnrxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:51.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b Oct 5 12:13:51.641: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f17a7a21-ce8b-4c68-88cb-9b5345356f3b] 
Namespace:persistent-local-volumes-test-4972 PodName:hostexec-v122-worker-rnrxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:51.641: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:51.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4972" for this suite. S [SKIPPING] [13.532 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:51.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:112 [It] should be reschedulable [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:331 Oct 5 12:13:51.919: INFO: Only supported for providers [openstack gce gke vsphere azure] (not local) [AfterEach] pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:327 [AfterEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:51.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5611" for this suite. 
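------------------------------
For the [Volume type: blockfswithformat] setup above, the hostexec pod runs a short shell pipeline on the node: back a file with dd, attach it to a free loop device, format it ext4 and mount it over the test directory. A sketch of the same sequence as a standalone Go program; it must run as root on the node, the directory name is a placeholder, and the commands mirror the ones printed in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a shell pipeline and returns its trimmed combined output,
// mirroring how the e2e hostexec pod shells out with `sh -c`.
func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	dir := "/tmp/local-volume-demo" // placeholder path

	// Create a 20 MiB backing file and attach it to the first free loop device.
	if _, err := run(fmt.Sprintf(
		"mkdir -p %[1]s && dd if=/dev/zero of=%[1]s/file bs=4096 count=5120 && losetup -f %[1]s/file", dir)); err != nil {
		panic(err)
	}

	// Find which loop device now backs the file.
	loopDev, err := run(fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir))
	if err != nil {
		panic(err)
	}

	// Format the loop device with ext4 and mount it over the directory.
	if _, err := run(fmt.Sprintf(
		"mkfs -t ext4 %[1]s && mount -t ext4 %[1]s %[2]s && chmod o+rwx %[2]s", loopDev, dir)); err != nil {
		panic(err)
	}

	fmt.Println("mounted", loopDev, "on", dir)
}
------------------------------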
S [SKIPPING] [0.050 seconds] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Default StorageClass [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:324 pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:325 should be reschedulable [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:331 Only supported for providers [openstack gce gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:333 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:43.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 STEP: Building a driver namespace object, basename csi-mock-volumes-2322 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Oct 5 12:12:43.240: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2322-8497/csi-attacher Oct 5 12:12:43.243: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2322 Oct 5 12:12:43.243: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2322 Oct 5 12:12:43.247: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2322 Oct 5 12:12:43.251: INFO: creating *v1.Role: csi-mock-volumes-2322-8497/external-attacher-cfg-csi-mock-volumes-2322 Oct 5 12:12:43.254: INFO: creating *v1.RoleBinding: csi-mock-volumes-2322-8497/csi-attacher-role-cfg Oct 5 12:12:43.258: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2322-8497/csi-provisioner Oct 5 12:12:43.262: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2322 Oct 5 12:12:43.262: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2322 Oct 5 12:12:43.266: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2322 Oct 5 12:12:43.270: INFO: creating *v1.Role: csi-mock-volumes-2322-8497/external-provisioner-cfg-csi-mock-volumes-2322 Oct 5 12:12:43.274: INFO: creating *v1.RoleBinding: csi-mock-volumes-2322-8497/csi-provisioner-role-cfg Oct 5 12:12:43.278: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2322-8497/csi-resizer Oct 5 12:12:43.282: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2322 Oct 5 12:12:43.282: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2322 Oct 5 12:12:43.285: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2322 Oct 5 12:12:43.289: INFO: creating *v1.Role: csi-mock-volumes-2322-8497/external-resizer-cfg-csi-mock-volumes-2322 Oct 5 12:12:43.293: INFO: creating *v1.RoleBinding: csi-mock-volumes-2322-8497/csi-resizer-role-cfg Oct 5 12:12:43.297: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-2322-8497/csi-snapshotter Oct 5 12:12:43.301: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2322 Oct 5 12:12:43.301: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2322 Oct 5 12:12:43.305: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2322 Oct 5 12:12:43.309: INFO: creating *v1.Role: csi-mock-volumes-2322-8497/external-snapshotter-leaderelection-csi-mock-volumes-2322 Oct 5 12:12:43.312: INFO: creating *v1.RoleBinding: csi-mock-volumes-2322-8497/external-snapshotter-leaderelection Oct 5 12:12:43.316: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2322-8497/csi-mock Oct 5 12:12:43.320: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2322 Oct 5 12:12:43.323: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2322 Oct 5 12:12:43.327: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2322 Oct 5 12:12:43.331: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2322 Oct 5 12:12:43.335: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2322 Oct 5 12:12:43.338: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2322 Oct 5 12:12:43.342: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2322 Oct 5 12:12:43.346: INFO: creating *v1.StatefulSet: csi-mock-volumes-2322-8497/csi-mockplugin Oct 5 12:12:43.353: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2322 Oct 5 12:12:43.357: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2322" Oct 5 12:12:43.360: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2322 to register on node v122-worker2 I1005 12:12:45.386896 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1005 12:12:45.389860 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2322","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:12:45.392469 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1005 12:12:45.395603 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1005 12:12:45.493906 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2322","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:12:45.844729 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2322"},"Error":"","FullError":null} STEP: Creating pod Oct 5 12:12:48.379: INFO: Warning: 
Making PVC: VolumeMode specified as invalid empty string, treating as nil I1005 12:12:48.411864 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-1dc85c04-6257-4bcd-9697-192e1227052b","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1005 12:12:48.421753 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-1dc85c04-6257-4bcd-9697-192e1227052b","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-1dc85c04-6257-4bcd-9697-192e1227052b"}}},"Error":"","FullError":null} I1005 12:12:49.571569 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:12:49.574875 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:12:49.577: INFO: >>> kubeConfig: /root/.kube/config I1005 12:12:49.721501 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1dc85c04-6257-4bcd-9697-192e1227052b/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-1dc85c04-6257-4bcd-9697-192e1227052b","storage.kubernetes.io/csiProvisionerIdentity":"1664971965397-8081-csi-mock-csi-mock-volumes-2322"}},"Response":{},"Error":"","FullError":null} I1005 12:12:49.728098 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:12:49.730424 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:12:49.732: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:49.871: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:12:50.000: INFO: >>> kubeConfig: /root/.kube/config I1005 12:12:50.149337 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1dc85c04-6257-4bcd-9697-192e1227052b/globalmount","target_path":"/var/lib/kubelet/pods/b21a648c-b65c-4136-a358-5de780c1c81f/volumes/kubernetes.io~csi/pvc-1dc85c04-6257-4bcd-9697-192e1227052b/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-1dc85c04-6257-4bcd-9697-192e1227052b","storage.kubernetes.io/csiProvisionerIdentity":"1664971965397-8081-csi-mock-csi-mock-volumes-2322"}},"Response":{},"Error":"","FullError":null} Oct 5 12:12:52.399: INFO: Deleting pod 
"pvc-volume-tester-fff2g" in namespace "csi-mock-volumes-2322" Oct 5 12:12:52.404: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fff2g" to be fully deleted Oct 5 12:12:53.506: INFO: >>> kubeConfig: /root/.kube/config I1005 12:12:53.631931 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/b21a648c-b65c-4136-a358-5de780c1c81f/volumes/kubernetes.io~csi/pvc-1dc85c04-6257-4bcd-9697-192e1227052b/mount"},"Response":{},"Error":"","FullError":null} I1005 12:12:53.709488 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:12:53.711715 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1dc85c04-6257-4bcd-9697-192e1227052b/globalmount"},"Response":{},"Error":"","FullError":null} I1005 12:12:56.438011 23 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Oct 5 12:12:57.418: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-t4cmn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2322", SelfLink:"", UID:"1dc85c04-6257-4bcd-9697-192e1227052b", ResourceVersion:"15339", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568768, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003d11ce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003d11cf8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002f821e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002f821f0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:12:57.418: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-t4cmn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2322", SelfLink:"", UID:"1dc85c04-6257-4bcd-9697-192e1227052b", ResourceVersion:"15342", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568768, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"v122-worker2"}, 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032a62b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032a62d0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0032a62e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032a6300), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002f9e110), VolumeMode:(*v1.PersistentVolumeMode)(0xc002f9e120), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:12:57.418: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-t4cmn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2322", SelfLink:"", UID:"1dc85c04-6257-4bcd-9697-192e1227052b", ResourceVersion:"15343", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568768, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2322", "volume.kubernetes.io/selected-node":"v122-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0043a6f90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043a6fa8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0043a6fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043a6fd8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0043a6ff0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0043a7008), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002f82d90), VolumeMode:(*v1.PersistentVolumeMode)(0xc002f82da0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:12:57.418: INFO: PVC event 
MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-t4cmn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2322", SelfLink:"", UID:"1dc85c04-6257-4bcd-9697-192e1227052b", ResourceVersion:"15353", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568768, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2322", "volume.kubernetes.io/selected-node":"v122-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ab878), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ab890), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ab8a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ab8c0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ab8d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ab8f0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1dc85c04-6257-4bcd-9697-192e1227052b", StorageClassName:(*string)(0xc0030b50f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0030b5100), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:12:57.418: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-t4cmn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2322", SelfLink:"", UID:"1dc85c04-6257-4bcd-9697-192e1227052b", ResourceVersion:"15354", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568768, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2322", "volume.kubernetes.io/selected-node":"v122-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ab920), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ab938), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ab950), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ab968), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ab980), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ab998), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059ab9b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059ab9c8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1dc85c04-6257-4bcd-9697-192e1227052b", StorageClassName:(*string)(0xc0030b5130), VolumeMode:(*v1.PersistentVolumeMode)(0xc0030b5140), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:12:57.418: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-t4cmn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2322", SelfLink:"", UID:"1dc85c04-6257-4bcd-9697-192e1227052b", ResourceVersion:"15514", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568768, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(0xc0059ab9f8), DeletionGracePeriodSeconds:(*int64)(0xc004103638), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2322", "volume.kubernetes.io/selected-node":"v122-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059aba10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059aba28), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059aba40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059aba58), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059aba70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059aba88), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059abaa0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059abab8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, 
s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1dc85c04-6257-4bcd-9697-192e1227052b", StorageClassName:(*string)(0xc0030b5180), VolumeMode:(*v1.PersistentVolumeMode)(0xc0030b5190), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:12:57.419: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-t4cmn", GenerateName:"pvc-", Namespace:"csi-mock-volumes-2322", SelfLink:"", UID:"1dc85c04-6257-4bcd-9697-192e1227052b", ResourceVersion:"15515", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568768, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(0xc000fc9f08), DeletionGracePeriodSeconds:(*int64)(0xc003f99d08), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-2322", "volume.kubernetes.io/selected-node":"v122-worker2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000fc9f20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000fc9f38), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000fc9f50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000fc9f68), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000fc9f80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000fc9f98), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000fc9fb0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000fc9fc8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-1dc85c04-6257-4bcd-9697-192e1227052b", StorageClassName:(*string)(0xc003109cf0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003109d00), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-fff2g Oct 5 12:12:57.419: INFO: Deleting pod "pvc-volume-tester-fff2g" in namespace "csi-mock-volumes-2322" STEP: Deleting claim pvc-t4cmn STEP: Deleting storageclass 
csi-mock-volumes-2322-sccphkd STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2322 STEP: Waiting for namespaces [csi-mock-volumes-2322] to vanish STEP: uninstalling csi mock driver Oct 5 12:13:10.462: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2322-8497/csi-attacher Oct 5 12:13:10.468: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2322 Oct 5 12:13:10.473: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2322 Oct 5 12:13:10.477: INFO: deleting *v1.Role: csi-mock-volumes-2322-8497/external-attacher-cfg-csi-mock-volumes-2322 Oct 5 12:13:10.482: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2322-8497/csi-attacher-role-cfg Oct 5 12:13:10.486: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2322-8497/csi-provisioner Oct 5 12:13:10.490: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2322 Oct 5 12:13:10.494: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2322 Oct 5 12:13:10.499: INFO: deleting *v1.Role: csi-mock-volumes-2322-8497/external-provisioner-cfg-csi-mock-volumes-2322 Oct 5 12:13:10.503: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2322-8497/csi-provisioner-role-cfg Oct 5 12:13:10.508: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2322-8497/csi-resizer Oct 5 12:13:10.512: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2322 Oct 5 12:13:10.517: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2322 Oct 5 12:13:10.522: INFO: deleting *v1.Role: csi-mock-volumes-2322-8497/external-resizer-cfg-csi-mock-volumes-2322 Oct 5 12:13:10.527: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2322-8497/csi-resizer-role-cfg Oct 5 12:13:10.532: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2322-8497/csi-snapshotter Oct 5 12:13:10.536: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2322 Oct 5 12:13:10.541: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2322 Oct 5 12:13:10.545: INFO: deleting *v1.Role: csi-mock-volumes-2322-8497/external-snapshotter-leaderelection-csi-mock-volumes-2322 Oct 5 12:13:10.549: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2322-8497/external-snapshotter-leaderelection Oct 5 12:13:10.554: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2322-8497/csi-mock Oct 5 12:13:10.558: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2322 Oct 5 12:13:10.562: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2322 Oct 5 12:13:10.567: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2322 Oct 5 12:13:10.571: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2322 Oct 5 12:13:10.576: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2322 Oct 5 12:13:10.580: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2322 Oct 5 12:13:10.584: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2322 Oct 5 12:13:10.589: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2322-8497/csi-mockplugin Oct 5 12:13:10.593: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2322 STEP: deleting the driver namespace: csi-mock-volumes-2322-8497 STEP: Waiting for namespaces [csi-mock-volumes-2322-8497] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:54.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:71.467 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023 exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:35.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-216b29ea-cb3b-4192-9942-ed354131b387" Oct 5 12:13:37.540: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-216b29ea-cb3b-4192-9942-ed354131b387 && dd if=/dev/zero of=/tmp/local-volume-test-216b29ea-cb3b-4192-9942-ed354131b387/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-216b29ea-cb3b-4192-9942-ed354131b387/file] Namespace:persistent-local-volumes-test-3238 PodName:hostexec-v122-worker-l626p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:37.541: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:37.680: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-216b29ea-cb3b-4192-9942-ed354131b387/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3238 PodName:hostexec-v122-worker-l626p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:37.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:13:37.835: INFO: Creating a PV followed by a PVC Oct 5 12:13:37.843: INFO: Waiting for PV local-pvkxwgg to bind to PVC pvc-np8k2 Oct 5 12:13:37.843: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-np8k2] to have phase Bound Oct 5 12:13:37.846: INFO: PersistentVolumeClaim pvc-np8k2 found but phase is Pending instead of Bound. Oct 5 12:13:39.850: INFO: PersistentVolumeClaim pvc-np8k2 found but phase is Pending instead of Bound. Oct 5 12:13:41.855: INFO: PersistentVolumeClaim pvc-np8k2 found but phase is Pending instead of Bound. Oct 5 12:13:43.859: INFO: PersistentVolumeClaim pvc-np8k2 found but phase is Pending instead of Bound. Oct 5 12:13:45.864: INFO: PersistentVolumeClaim pvc-np8k2 found but phase is Pending instead of Bound. 
Oct 5 12:13:47.868: INFO: PersistentVolumeClaim pvc-np8k2 found but phase is Pending instead of Bound. Oct 5 12:13:49.872: INFO: PersistentVolumeClaim pvc-np8k2 found and phase=Bound (12.029314964s) Oct 5 12:13:49.873: INFO: Waiting up to 3m0s for PersistentVolume local-pvkxwgg to have phase Bound Oct 5 12:13:49.876: INFO: PersistentVolume local-pvkxwgg found and phase=Bound (3.060995ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:13:53.901: INFO: pod "pod-7fbc78e0-1d6b-440b-9f96-b668beb64e4c" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:13:53.901: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3238 PodName:pod-7fbc78e0-1d6b-440b-9f96-b668beb64e4c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:13:53.901: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:54.006: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000159 seconds, 110.6KB/s", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Oct 5 12:13:54.006: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3238 PodName:pod-7fbc78e0-1d6b-440b-9f96-b668beb64e4c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:13:54.006: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:54.112: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod1 Oct 5 12:13:54.112: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop8 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3238 PodName:pod-7fbc78e0-1d6b-440b-9f96-b668beb64e4c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:13:54.112: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:54.179: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop8 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000051 seconds, 210.6KB/s", err: [AfterEach] One pod requesting one prebound PVC 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-7fbc78e0-1d6b-440b-9f96-b668beb64e4c in namespace persistent-local-volumes-test-3238 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:13:54.184: INFO: Deleting PersistentVolumeClaim "pvc-np8k2" Oct 5 12:13:54.188: INFO: Deleting PersistentVolume "local-pvkxwgg" Oct 5 12:13:54.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-216b29ea-cb3b-4192-9942-ed354131b387/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3238 PodName:hostexec-v122-worker-l626p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:54.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop8" on node "v122-worker" at path /tmp/local-volume-test-216b29ea-cb3b-4192-9942-ed354131b387/file Oct 5 12:13:54.322: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-3238 PodName:hostexec-v122-worker-l626p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:54.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-216b29ea-cb3b-4192-9942-ed354131b387 Oct 5 12:13:54.465: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-216b29ea-cb3b-4192-9942-ed354131b387] Namespace:persistent-local-volumes-test-3238 PodName:hostexec-v122-worker-l626p ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:54.465: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:54.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3238" for this suite. 
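The [Volume type: block] spec above pre-binds a PersistentVolumeClaim to a local PersistentVolume backed by a loop device on node "v122-worker". For orientation, here is a minimal client-go sketch of a comparable local, block-mode PV pinned to one node; the names, device path, capacity, and storage class are illustrative placeholders, not the objects the framework actually builds.

    // Sketch (not the test's own code): a local, block-mode PersistentVolume
    // pinned to a single node, roughly what a [Volume type: block] spec binds to.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func localBlockPV(nodeName, devicePath string) *corev1.PersistentVolume {
        volumeMode := corev1.PersistentVolumeBlock
        return &corev1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
            Spec: corev1.PersistentVolumeSpec{
                Capacity: corev1.ResourceList{
                    corev1.ResourceStorage: resource.MustParse("2Gi"), // placeholder size
                },
                AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
                StorageClassName:              "local-storage", // placeholder class
                VolumeMode:                    &volumeMode,
                PersistentVolumeSource: corev1.PersistentVolumeSource{
                    Local: &corev1.LocalVolumeSource{Path: devicePath},
                },
                // Node affinity is what forces later pods onto the worker that
                // actually owns the loop device.
                NodeAffinity: &corev1.VolumeNodeAffinity{
                    Required: &corev1.NodeSelector{
                        NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                            MatchExpressions: []corev1.NodeSelectorRequirement{{
                                Key:      "kubernetes.io/hostname",
                                Operator: corev1.NodeSelectorOpIn,
                                Values:   []string{nodeName},
                            }},
                        }},
                    },
                },
            },
        }
    }

    func main() {
        pv := localBlockPV("v122-worker", "/dev/loop8")
        fmt.Println("local PV pinned to node:", pv.Spec.NodeAffinity.Required.NodeSelectorTerms[0].MatchExpressions[0].Values[0])
    }

The required node affinity explains why the spec first runs the dd/losetup commands on a specific worker and only then creates the PV and PVC pair.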
• [SLOW TEST:19.138 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":14,"skipped":368,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} S ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":28,"skipped":1109,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:54.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:13:54.673: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:13:54.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4676" for this suite. 
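The Volume metrics spec reported just above is skipped because this suite runs on a local (kind) cluster rather than one of the supported cloud providers. A rough sketch of such a provider gate is shown below; the function name and the environment-variable lookup are assumptions made for illustration, not the e2e framework's actual helper.

    // Sketch of a provider gate like the one that skips the Volume metrics spec
    // on a local cluster. How the provider is supplied is an assumption here.
    package main

    import (
        "fmt"
        "os"
    )

    // providerSupported reports whether the current provider is in the supported
    // set; callers would mark the spec skipped otherwise.
    func providerSupported(current string, supported ...string) bool {
        for _, p := range supported {
            if p == current {
                return true
            }
        }
        fmt.Printf("Only supported for providers %v (not %s)\n", supported, current)
        return false
    }

    func main() {
        provider := os.Getenv("E2E_PROVIDER") // assumption: provider passed via env for this sketch
        if !providerSupported(provider, "gce", "gke", "aws") {
            return // the real suite records the spec as SKIPPED instead of returning
        }
        // ... metrics assertions would run here ...
    }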
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.049 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:366 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:54.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-474bfbd8-9752-4894-b103-89cc52cb5960" Oct 5 12:13:56.795: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-474bfbd8-9752-4894-b103-89cc52cb5960 && dd if=/dev/zero of=/tmp/local-volume-test-474bfbd8-9752-4894-b103-89cc52cb5960/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-474bfbd8-9752-4894-b103-89cc52cb5960/file] Namespace:persistent-local-volumes-test-1327 PodName:hostexec-v122-worker-jblp8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:56.795: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:13:57.017: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-474bfbd8-9752-4894-b103-89cc52cb5960/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1327 PodName:hostexec-v122-worker-jblp8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:57.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:13:57.173: INFO: Creating a PV followed by a PVC Oct 5 12:13:57.180: INFO: Waiting for PV local-pvc6gwt to bind to PVC pvc-qqrwq Oct 5 12:13:57.180: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-qqrwq] to have phase Bound Oct 5 12:13:57.182: INFO: PersistentVolumeClaim pvc-qqrwq found but phase is Pending instead of Bound. 
Oct 5 12:13:59.187: INFO: PersistentVolumeClaim pvc-qqrwq found and phase=Bound (2.006654417s) Oct 5 12:13:59.187: INFO: Waiting up to 3m0s for PersistentVolume local-pvc6gwt to have phase Bound Oct 5 12:13:59.190: INFO: PersistentVolume local-pvc6gwt found and phase=Bound (3.144948ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:14:03.223: INFO: pod "pod-7d8c0674-eff0-45f0-98df-af0819191204" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:14:03.223: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1327 PodName:pod-7d8c0674-eff0-45f0-98df-af0819191204 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:03.223: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:03.337: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Oct 5 12:14:03.337: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1327 PodName:pod-7d8c0674-eff0-45f0-98df-af0819191204 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:03.337: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:03.419: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Oct 5 12:14:03.420: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop8 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1327 PodName:pod-7d8c0674-eff0-45f0-98df-af0819191204 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:03.420: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:03.532: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop8 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-7d8c0674-eff0-45f0-98df-af0819191204 in namespace persistent-local-volumes-test-1327 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:14:03.538: INFO: Deleting PersistentVolumeClaim "pvc-qqrwq" Oct 5 12:14:03.543: INFO: Deleting PersistentVolume "local-pvc6gwt" Oct 5 12:14:03.547: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-474bfbd8-9752-4894-b103-89cc52cb5960/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1327 PodName:hostexec-v122-worker-jblp8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:03.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device 
"/dev/loop8" on node "v122-worker" at path /tmp/local-volume-test-474bfbd8-9752-4894-b103-89cc52cb5960/file Oct 5 12:14:03.699: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-1327 PodName:hostexec-v122-worker-jblp8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:03.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-474bfbd8-9752-4894-b103-89cc52cb5960 Oct 5 12:14:03.851: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-474bfbd8-9752-4894-b103-89cc52cb5960] Namespace:persistent-local-volumes-test-1327 PodName:hostexec-v122-worker-jblp8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:03.851: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:03.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1327" for this suite. • [SLOW TEST:9.254 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":15,"skipped":432,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:51.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:13:53.176: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a65f8eb9-c654-4c22-b1be-a0e2598b2f40] Namespace:persistent-local-volumes-test-5312 PodName:hostexec-v122-worker2-mnbxj ContainerName:agnhost-container 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:53.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:13:53.323: INFO: Creating a PV followed by a PVC Oct 5 12:13:53.332: INFO: Waiting for PV local-pvvpbbm to bind to PVC pvc-b6gks Oct 5 12:13:53.332: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-b6gks] to have phase Bound Oct 5 12:13:53.335: INFO: PersistentVolumeClaim pvc-b6gks found but phase is Pending instead of Bound. Oct 5 12:13:55.341: INFO: PersistentVolumeClaim pvc-b6gks found but phase is Pending instead of Bound. Oct 5 12:13:57.345: INFO: PersistentVolumeClaim pvc-b6gks found but phase is Pending instead of Bound. Oct 5 12:13:59.350: INFO: PersistentVolumeClaim pvc-b6gks found but phase is Pending instead of Bound. Oct 5 12:14:01.354: INFO: PersistentVolumeClaim pvc-b6gks found but phase is Pending instead of Bound. Oct 5 12:14:03.359: INFO: PersistentVolumeClaim pvc-b6gks found but phase is Pending instead of Bound. Oct 5 12:14:05.364: INFO: PersistentVolumeClaim pvc-b6gks found and phase=Bound (12.031373571s) Oct 5 12:14:05.364: INFO: Waiting up to 3m0s for PersistentVolume local-pvvpbbm to have phase Bound Oct 5 12:14:05.367: INFO: PersistentVolume local-pvvpbbm found and phase=Bound (3.198703ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:14:07.393: INFO: pod "pod-a1d574ce-6dc1-4238-bade-6a7f476dcd5c" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:14:07.393: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5312 PodName:pod-a1d574ce-6dc1-4238-bade-6a7f476dcd5c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:07.393: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:07.513: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Oct 5 12:14:07.513: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5312 PodName:pod-a1d574ce-6dc1-4238-bade-6a7f476dcd5c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:07.513: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:07.628: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-a1d574ce-6dc1-4238-bade-6a7f476dcd5c in namespace persistent-local-volumes-test-5312 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:14:07.634: INFO: Deleting PersistentVolumeClaim "pvc-b6gks" Oct 5 12:14:07.639: INFO: Deleting PersistentVolume "local-pvvpbbm" STEP: Removing the test directory Oct 5 
12:14:07.643: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a65f8eb9-c654-4c22-b1be-a0e2598b2f40] Namespace:persistent-local-volumes-test-5312 PodName:hostexec-v122-worker2-mnbxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:07.643: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:07.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5312" for this suite. • [SLOW TEST:16.670 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":9,"skipped":556,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:51.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:13:54.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-5e9960a7-d219-4194-977e-48fb9eaff792 && mount --bind /tmp/local-volume-test-5e9960a7-d219-4194-977e-48fb9eaff792 /tmp/local-volume-test-5e9960a7-d219-4194-977e-48fb9eaff792] Namespace:persistent-local-volumes-test-2830 PodName:hostexec-v122-worker2-pk8w9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:13:54.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:13:54.125: INFO: Creating a PV followed by a PVC Oct 5 12:13:54.135: INFO: Waiting for PV local-pv87qfj to bind to PVC pvc-q2cdg Oct 5 12:13:54.135: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-q2cdg] to have phase Bound Oct 5 12:13:54.138: INFO: 
PersistentVolumeClaim pvc-q2cdg found but phase is Pending instead of Bound. Oct 5 12:13:56.142: INFO: PersistentVolumeClaim pvc-q2cdg found but phase is Pending instead of Bound. Oct 5 12:13:58.147: INFO: PersistentVolumeClaim pvc-q2cdg found but phase is Pending instead of Bound. Oct 5 12:14:00.152: INFO: PersistentVolumeClaim pvc-q2cdg found but phase is Pending instead of Bound. Oct 5 12:14:02.156: INFO: PersistentVolumeClaim pvc-q2cdg found but phase is Pending instead of Bound. Oct 5 12:14:04.160: INFO: PersistentVolumeClaim pvc-q2cdg found but phase is Pending instead of Bound. Oct 5 12:14:06.164: INFO: PersistentVolumeClaim pvc-q2cdg found and phase=Bound (12.029309398s) Oct 5 12:14:06.164: INFO: Waiting up to 3m0s for PersistentVolume local-pv87qfj to have phase Bound Oct 5 12:14:06.168: INFO: PersistentVolume local-pv87qfj found and phase=Bound (3.225515ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Oct 5 12:14:10.194: INFO: pod "pod-0ac25643-2450-4316-bed5-ac24d0b2d7fa" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:14:10.194: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2830 PodName:pod-0ac25643-2450-4316-bed5-ac24d0b2d7fa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:10.194: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:10.313: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:14:10.313: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2830 PodName:pod-0ac25643-2450-4316-bed5-ac24d0b2d7fa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:10.314: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:10.446: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Oct 5 12:14:12.466: INFO: pod "pod-8ad61731-1355-402b-82ad-5cf18d9ade81" created on Node "v122-worker2" Oct 5 12:14:12.466: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2830 PodName:pod-8ad61731-1355-402b-82ad-5cf18d9ade81 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:12.466: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:12.581: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Oct 5 12:14:12.581: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5e9960a7-d219-4194-977e-48fb9eaff792 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2830 PodName:pod-8ad61731-1355-402b-82ad-5cf18d9ade81 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:12.581: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:12.707: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5e9960a7-d219-4194-977e-48fb9eaff792 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: 
Reading in pod1 Oct 5 12:14:12.707: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2830 PodName:pod-0ac25643-2450-4316-bed5-ac24d0b2d7fa ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:12.707: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:12.789: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-5e9960a7-d219-4194-977e-48fb9eaff792", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-0ac25643-2450-4316-bed5-ac24d0b2d7fa in namespace persistent-local-volumes-test-2830 STEP: Deleting pod2 STEP: Deleting pod pod-8ad61731-1355-402b-82ad-5cf18d9ade81 in namespace persistent-local-volumes-test-2830 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:14:12.800: INFO: Deleting PersistentVolumeClaim "pvc-q2cdg" Oct 5 12:14:12.805: INFO: Deleting PersistentVolume "local-pv87qfj" STEP: Removing the test directory Oct 5 12:14:12.809: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-5e9960a7-d219-4194-977e-48fb9eaff792 && rm -r /tmp/local-volume-test-5e9960a7-d219-4194-977e-48fb9eaff792] Namespace:persistent-local-volumes-test-2830 PodName:hostexec-v122-worker2-pk8w9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:12.810: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:12.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2830" for this suite. 
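The ExecWithOptions entries in these local-volume specs all boil down to running /bin/sh -c inside a test pod through the exec subresource. Below is a minimal client-go sketch of that kind of call, assuming the kubeconfig path shown in the log; the namespace, pod, container, and command are taken from this run purely as placeholders.

    // Minimal sketch of an exec into a pod, comparable to the ExecWithOptions
    // calls logged above. Namespace, pod, container and command are placeholders.
    package main

    import (
        "bytes"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        req := client.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace("persistent-local-volumes-test-2830").
            Name("pod-0ac25643-2450-4316-bed5-ac24d0b2d7fa").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "write-pod",
                Command:   []string{"/bin/sh", "-c", "cat /mnt/volume1/test-file"},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            panic(err)
        }

        var stdout, stderr bytes.Buffer
        if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
            panic(err)
        }
        fmt.Printf("out: %q, stderr: %q\n", stdout.String(), stderr.String())
    }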
• [SLOW TEST:20.996 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":22,"skipped":641,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:13.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Oct 5 12:14:13.065: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Oct 5 12:14:13.072: INFO: Waiting up to 30s for PersistentVolume hostpath-pdf8l to have phase Available Oct 5 12:14:13.075: INFO: PersistentVolume hostpath-pdf8l found but phase is Pending instead of Available. Oct 5 12:14:14.079: INFO: PersistentVolume hostpath-pdf8l found and phase=Available (1.007478149s) STEP: Checking that PV Protection finalizer is set [It] Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 STEP: Deleting the PV Oct 5 12:14:14.089: INFO: Waiting up to 3m0s for PersistentVolume hostpath-pdf8l to get deleted Oct 5 12:14:14.093: INFO: PersistentVolume hostpath-pdf8l found and phase=Available (3.402078ms) Oct 5 12:14:16.097: INFO: PersistentVolume hostpath-pdf8l was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:16.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-5498" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Oct 5 12:14:16.108: INFO: AfterEach: Cleaning up test resources. 
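The PV Protection spec above waits for the hostpath PV to report phase Available and then verifies that the kubernetes.io/pv-protection finalizer is present before exercising deletion. A minimal client-go sketch of those two checks follows; the polling interval and error handling are assumptions, and the PV name is reused from the log only as a placeholder.

    // Sketch of the two PV Protection checks: wait for a PersistentVolume to
    // report a phase, then confirm the protection finalizer is present.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        pvName := "hostpath-pdf8l" // placeholder

        // Poll until the PV reports phase Available (or the 30s budget runs out).
        err = wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
            pv, err := client.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return pv.Status.Phase == corev1.VolumeAvailable, nil
        })
        if err != nil {
            panic(err)
        }

        // Confirm the pv-protection finalizer the controller adds to every PV.
        pv, err := client.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, f := range pv.Finalizers {
            if f == "kubernetes.io/pv-protection" {
                fmt.Println("PV Protection finalizer is set")
            }
        }
    }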
Oct 5 12:14:16.108: INFO: pvc is nil Oct 5 12:14:16.108: INFO: Deleting PersistentVolume "hostpath-pdf8l" • ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":23,"skipped":679,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:54.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 STEP: Building a driver namespace object, basename csi-mock-volumes-6895 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:13:54.849: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-attacher Oct 5 12:13:54.853: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6895 Oct 5 12:13:54.853: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6895 Oct 5 12:13:54.856: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6895 Oct 5 12:13:54.861: INFO: creating *v1.Role: csi-mock-volumes-6895-1061/external-attacher-cfg-csi-mock-volumes-6895 Oct 5 12:13:54.864: INFO: creating *v1.RoleBinding: csi-mock-volumes-6895-1061/csi-attacher-role-cfg Oct 5 12:13:54.868: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-provisioner Oct 5 12:13:54.872: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6895 Oct 5 12:13:54.872: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6895 Oct 5 12:13:54.876: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6895 Oct 5 12:13:54.879: INFO: creating *v1.Role: csi-mock-volumes-6895-1061/external-provisioner-cfg-csi-mock-volumes-6895 Oct 5 12:13:54.883: INFO: creating *v1.RoleBinding: csi-mock-volumes-6895-1061/csi-provisioner-role-cfg Oct 5 12:13:54.888: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-resizer Oct 5 12:13:54.891: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6895 Oct 5 12:13:54.892: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6895 Oct 5 12:13:54.895: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6895 Oct 5 12:13:54.899: INFO: creating *v1.Role: csi-mock-volumes-6895-1061/external-resizer-cfg-csi-mock-volumes-6895 Oct 5 12:13:54.903: INFO: creating *v1.RoleBinding: csi-mock-volumes-6895-1061/csi-resizer-role-cfg Oct 5 12:13:54.907: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-snapshotter Oct 5 12:13:54.910: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6895 Oct 5 12:13:54.910: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6895 Oct 5 12:13:54.914: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6895 Oct 5 12:13:54.917: INFO: creating *v1.Role: 
csi-mock-volumes-6895-1061/external-snapshotter-leaderelection-csi-mock-volumes-6895 Oct 5 12:13:54.921: INFO: creating *v1.RoleBinding: csi-mock-volumes-6895-1061/external-snapshotter-leaderelection Oct 5 12:13:54.924: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-mock Oct 5 12:13:54.928: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6895 Oct 5 12:13:54.932: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6895 Oct 5 12:13:54.936: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6895 Oct 5 12:13:54.939: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6895 Oct 5 12:13:54.943: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6895 Oct 5 12:13:54.946: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6895 Oct 5 12:13:54.950: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6895 Oct 5 12:13:54.954: INFO: creating *v1.StatefulSet: csi-mock-volumes-6895-1061/csi-mockplugin Oct 5 12:13:54.960: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6895 Oct 5 12:13:54.964: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6895" Oct 5 12:13:54.967: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6895 to register on node v122-worker STEP: Creating pod Oct 5 12:13:59.985: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:13:59.991: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-47hfk] to have phase Bound Oct 5 12:13:59.994: INFO: PersistentVolumeClaim pvc-47hfk found but phase is Pending instead of Bound. 
Oct 5 12:14:01.999: INFO: PersistentVolumeClaim pvc-47hfk found and phase=Bound (2.007317029s) Oct 5 12:14:04.018: INFO: Deleting pod "pvc-volume-tester-d2bqh" in namespace "csi-mock-volumes-6895" Oct 5 12:14:04.023: INFO: Wait up to 5m0s for pod "pvc-volume-tester-d2bqh" to be fully deleted STEP: Checking PVC events Oct 5 12:14:08.063: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-47hfk", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6895", SelfLink:"", UID:"b55780af-488e-4402-a2bd-02c624450c85", ResourceVersion:"16762", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568839, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000dc5620), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000dc5638), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0001d29b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0001d29c0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:14:08.063: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-47hfk", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6895", SelfLink:"", UID:"b55780af-488e-4402-a2bd-02c624450c85", ResourceVersion:"16763", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568839, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6895"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e21f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2210), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2228), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2240), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, 
VolumeName:"", StorageClassName:(*string)(0xc0036e1cb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0036e1cc0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:14:08.063: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-47hfk", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6895", SelfLink:"", UID:"b55780af-488e-4402-a2bd-02c624450c85", ResourceVersion:"16770", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568839, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6895"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2cd8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2cf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2d08), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-b55780af-488e-4402-a2bd-02c624450c85", StorageClassName:(*string)(0xc00383a600), VolumeMode:(*v1.PersistentVolumeMode)(0xc00383a620), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:14:08.063: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-47hfk", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6895", SelfLink:"", UID:"b55780af-488e-4402-a2bd-02c624450c85", ResourceVersion:"16771", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568839, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6895"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2d38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2d50), Subresource:""}, 
v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2d68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2d80), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2d98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2db0), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-b55780af-488e-4402-a2bd-02c624450c85", StorageClassName:(*string)(0xc00383a680), VolumeMode:(*v1.PersistentVolumeMode)(0xc00383a690), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:14:08.064: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-47hfk", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6895", SelfLink:"", UID:"b55780af-488e-4402-a2bd-02c624450c85", ResourceVersion:"16905", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568839, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(0xc0045e2de0), DeletionGracePeriodSeconds:(*int64)(0xc000db9798), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6895"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2df8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2e10), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2e28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2e40), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2e58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2e70), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-b55780af-488e-4402-a2bd-02c624450c85", StorageClassName:(*string)(0xc00383a6d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00383a6e0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, 
Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Oct 5 12:14:08.064: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-47hfk", GenerateName:"pvc-", Namespace:"csi-mock-volumes-6895", SelfLink:"", UID:"b55780af-488e-4402-a2bd-02c624450c85", ResourceVersion:"16910", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63800568839, loc:(*time.Location)(0xa04d060)}}, DeletionTimestamp:(*v1.Time)(0xc0045e2ea0), DeletionGracePeriodSeconds:(*int64)(0xc000db9858), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-6895"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2eb8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2ed0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2ee8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2f00), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045e2f18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045e2f30), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-b55780af-488e-4402-a2bd-02c624450c85", StorageClassName:(*string)(0xc00383a730), VolumeMode:(*v1.PersistentVolumeMode)(0xc00383a740), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-d2bqh Oct 5 12:14:08.064: INFO: Deleting pod "pvc-volume-tester-d2bqh" in namespace "csi-mock-volumes-6895" STEP: Deleting claim pvc-47hfk STEP: Deleting storageclass csi-mock-volumes-6895-sck6fsv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6895 STEP: Waiting for namespaces [csi-mock-volumes-6895] to vanish STEP: uninstalling csi mock driver Oct 5 12:14:14.084: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-attacher Oct 5 12:14:14.089: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6895 Oct 5 12:14:14.094: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6895 Oct 5 12:14:14.099: 
INFO: deleting *v1.Role: csi-mock-volumes-6895-1061/external-attacher-cfg-csi-mock-volumes-6895 Oct 5 12:14:14.104: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6895-1061/csi-attacher-role-cfg Oct 5 12:14:14.108: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-provisioner Oct 5 12:14:14.113: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6895 Oct 5 12:14:14.117: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6895 Oct 5 12:14:14.122: INFO: deleting *v1.Role: csi-mock-volumes-6895-1061/external-provisioner-cfg-csi-mock-volumes-6895 Oct 5 12:14:14.127: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6895-1061/csi-provisioner-role-cfg Oct 5 12:14:14.131: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-resizer Oct 5 12:14:14.136: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6895 Oct 5 12:14:14.141: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6895 Oct 5 12:14:14.146: INFO: deleting *v1.Role: csi-mock-volumes-6895-1061/external-resizer-cfg-csi-mock-volumes-6895 Oct 5 12:14:14.150: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6895-1061/csi-resizer-role-cfg Oct 5 12:14:14.155: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-snapshotter Oct 5 12:14:14.160: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6895 Oct 5 12:14:14.164: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6895 Oct 5 12:14:14.169: INFO: deleting *v1.Role: csi-mock-volumes-6895-1061/external-snapshotter-leaderelection-csi-mock-volumes-6895 Oct 5 12:14:14.173: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6895-1061/external-snapshotter-leaderelection Oct 5 12:14:14.181: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6895-1061/csi-mock Oct 5 12:14:14.187: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6895 Oct 5 12:14:14.192: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6895 Oct 5 12:14:14.197: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6895 Oct 5 12:14:14.201: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6895 Oct 5 12:14:14.206: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6895 Oct 5 12:14:14.210: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6895 Oct 5 12:14:14.215: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6895 Oct 5 12:14:14.219: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6895-1061/csi-mockplugin Oct 5 12:14:14.225: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6895 STEP: deleting the driver namespace: csi-mock-volumes-6895-1061 STEP: Waiting for namespaces [csi-mock-volumes-6895-1061] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:20.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:25.482 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023 unlimited 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":29,"skipped":1160,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:09:23.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:23.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2552" for this suite. • [SLOW TEST:300.066 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":7,"skipped":447,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:23.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Oct 5 12:14:23.604: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:23.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-3597" for this suite. 
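The ConfigMap spec above points its pod at a ConfigMap that is never created and marks the volume non-optional, so the kubelet can never populate the volume and the pod never reaches Running for the full five-minute wait. Below is a minimal sketch of such a pod spec, assuming the k8s.io/api and k8s.io/apimachinery modules; the pod name, image, and ConfigMap name are illustrative, not the ones the suite generates.

// Sketch of a pod that references a missing, non-optional ConfigMap volume.
// Names ("missing-configmap", "busybox", ...) are illustrative only.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := false // non-optional: the kubelet must find the ConfigMap

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "missing-configmap"},
						Optional:             &optional,
					},
				},
			}},
		},
	}

	// Print the spec; submitted to a cluster, the pod would sit in
	// ContainerCreating with FailedMount events until the ConfigMap appears.
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}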
S [SKIPPING] in Spec Setup (BeforeEach) [0.050 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 1 containers and 2 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:04.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:14:06.098: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-424de6d3-2226-4f54-93bf-bf21c076449b] Namespace:persistent-local-volumes-test-9233 PodName:hostexec-v122-worker-8sf8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:06.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:14:06.258: INFO: Creating a PV followed by a PVC Oct 5 12:14:06.266: INFO: Waiting for PV local-pvpxfj5 to bind to PVC pvc-vpl9t Oct 5 12:14:06.266: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vpl9t] to have phase Bound Oct 5 12:14:06.270: INFO: PersistentVolumeClaim pvc-vpl9t found but phase is Pending instead of Bound. Oct 5 12:14:08.273: INFO: PersistentVolumeClaim pvc-vpl9t found but phase is Pending instead of Bound. Oct 5 12:14:10.277: INFO: PersistentVolumeClaim pvc-vpl9t found but phase is Pending instead of Bound. Oct 5 12:14:12.282: INFO: PersistentVolumeClaim pvc-vpl9t found but phase is Pending instead of Bound. Oct 5 12:14:14.286: INFO: PersistentVolumeClaim pvc-vpl9t found but phase is Pending instead of Bound. Oct 5 12:14:16.290: INFO: PersistentVolumeClaim pvc-vpl9t found but phase is Pending instead of Bound. Oct 5 12:14:18.295: INFO: PersistentVolumeClaim pvc-vpl9t found but phase is Pending instead of Bound. 
Oct 5 12:14:20.299: INFO: PersistentVolumeClaim pvc-vpl9t found and phase=Bound (14.033206158s) Oct 5 12:14:20.299: INFO: Waiting up to 3m0s for PersistentVolume local-pvpxfj5 to have phase Bound Oct 5 12:14:20.302: INFO: PersistentVolume local-pvpxfj5 found and phase=Bound (3.057617ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Oct 5 12:14:24.330: INFO: pod "pod-ad0af48c-0ad1-42cb-bbb5-c1809341b643" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:14:24.330: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9233 PodName:pod-ad0af48c-0ad1-42cb-bbb5-c1809341b643 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:24.330: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:24.442: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:14:24.442: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9233 PodName:pod-ad0af48c-0ad1-42cb-bbb5-c1809341b643 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:24.442: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:24.551: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Oct 5 12:14:30.571: INFO: pod "pod-f649f603-94de-4e5c-ae4a-5441d2d3ca9c" created on Node "v122-worker" Oct 5 12:14:30.571: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9233 PodName:pod-f649f603-94de-4e5c-ae4a-5441d2d3ca9c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:30.571: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:30.674: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Oct 5 12:14:30.674: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-424de6d3-2226-4f54-93bf-bf21c076449b > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9233 PodName:pod-f649f603-94de-4e5c-ae4a-5441d2d3ca9c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:30.674: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:30.806: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-424de6d3-2226-4f54-93bf-bf21c076449b > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Oct 5 12:14:30.806: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9233 PodName:pod-ad0af48c-0ad1-42cb-bbb5-c1809341b643 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:30.806: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:30.920: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-424de6d3-2226-4f54-93bf-bf21c076449b", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-ad0af48c-0ad1-42cb-bbb5-c1809341b643 in namespace persistent-local-volumes-test-9233 STEP: Deleting pod2 STEP: Deleting pod pod-f649f603-94de-4e5c-ae4a-5441d2d3ca9c in namespace persistent-local-volumes-test-9233 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:14:30.930: INFO: Deleting PersistentVolumeClaim "pvc-vpl9t" Oct 5 12:14:30.934: INFO: Deleting PersistentVolume "local-pvpxfj5" STEP: Removing the test directory Oct 5 12:14:30.939: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-424de6d3-2226-4f54-93bf-bf21c076449b] Namespace:persistent-local-volumes-test-9233 PodName:hostexec-v122-worker-8sf8m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:30.939: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:31.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9233" for this suite. • [SLOW TEST:27.056 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":450,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:23.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79" Oct 5 12:14:25.855: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79 && dd if=/dev/zero of=/tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79/file] Namespace:persistent-local-volumes-test-9775 PodName:hostexec-v122-worker2-trrft ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:25.855: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:26.078: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9775 PodName:hostexec-v122-worker2-trrft ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:26.078: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:26.226: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop9 && mount -t ext4 /dev/loop9 /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79 && chmod o+rwx /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79] Namespace:persistent-local-volumes-test-9775 PodName:hostexec-v122-worker2-trrft ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:26.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:14:26.715: INFO: Creating a PV followed by a PVC Oct 5 12:14:26.724: INFO: Waiting for PV local-pv2cwqx to bind to PVC pvc-8v2z6 Oct 5 12:14:26.724: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8v2z6] to have phase Bound Oct 5 12:14:26.727: INFO: PersistentVolumeClaim pvc-8v2z6 found but phase is Pending instead of Bound. Oct 5 12:14:28.731: INFO: PersistentVolumeClaim pvc-8v2z6 found but phase is Pending instead of Bound. Oct 5 12:14:30.736: INFO: PersistentVolumeClaim pvc-8v2z6 found but phase is Pending instead of Bound. Oct 5 12:14:32.742: INFO: PersistentVolumeClaim pvc-8v2z6 found but phase is Pending instead of Bound. 
Oct 5 12:14:34.746: INFO: PersistentVolumeClaim pvc-8v2z6 found and phase=Bound (8.021801141s) Oct 5 12:14:34.746: INFO: Waiting up to 3m0s for PersistentVolume local-pv2cwqx to have phase Bound Oct 5 12:14:34.750: INFO: PersistentVolume local-pv2cwqx found and phase=Bound (3.243647ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:14:36.775: INFO: pod "pod-20bd6ea4-9d5e-421f-aea2-ee20035d5c8c" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:14:36.775: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9775 PodName:pod-20bd6ea4-9d5e-421f-aea2-ee20035d5c8c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:36.775: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:36.909: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Oct 5 12:14:36.909: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9775 PodName:pod-20bd6ea4-9d5e-421f-aea2-ee20035d5c8c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:36.909: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:37.026: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Oct 5 12:14:37.026: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9775 PodName:pod-20bd6ea4-9d5e-421f-aea2-ee20035d5c8c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:37.026: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:37.090: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-20bd6ea4-9d5e-421f-aea2-ee20035d5c8c in namespace persistent-local-volumes-test-9775 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:14:37.096: INFO: Deleting PersistentVolumeClaim "pvc-8v2z6" Oct 5 12:14:37.101: INFO: Deleting PersistentVolume "local-pv2cwqx" Oct 5 12:14:37.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79] Namespace:persistent-local-volumes-test-9775 PodName:hostexec-v122-worker2-trrft ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:37.106: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:37.234: 
INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9775 PodName:hostexec-v122-worker2-trrft ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:37.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop9" on node "v122-worker2" at path /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79/file Oct 5 12:14:37.389: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop9] Namespace:persistent-local-volumes-test-9775 PodName:hostexec-v122-worker2-trrft ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:37.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79 Oct 5 12:14:37.535: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d25e0010-59eb-4850-8cba-9157845feb79] Namespace:persistent-local-volumes-test-9775 PodName:hostexec-v122-worker2-trrft ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:37.535: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:37.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9775" for this suite. 
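The [Volume type: blockfswithformat] specs in this run back each local volume with a loop device: a file is created with dd, attached with losetup, formatted ext4, and mounted, and teardown unmounts, detaches the loop device, and removes the directory, exactly as the ExecWithOptions commands above show. Below is a minimal sketch of that lifecycle run directly on a node as root; it uses losetup -f --show to capture the device name instead of the losetup | grep | awk pipeline in the log, and the path is hypothetical.

// Sketch: loop-device-backed "blockfswithformat" volume, as driven above via
// the hostexec pod. Run as root on a node; the path is illustrative only.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func sh(cmd string) string {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%q failed: %v\n%s", cmd, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	dir := "/tmp/local-volume-test-example" // hypothetical test directory

	// Setup: back a loop device with a 20 MiB file, format it ext4, mount it.
	sh("mkdir -p " + dir)
	sh("dd if=/dev/zero of=" + dir + "/file bs=4096 count=5120")
	loopDev := sh("losetup -f --show " + dir + "/file") // e.g. /dev/loop9
	sh("mkfs -t ext4 " + loopDev)
	sh("mount -t ext4 " + loopDev + " " + dir)
	sh("chmod o+rwx " + dir)

	// ... a local PV on dir would be created, bound and exercised here ...

	// Teardown, matching the AfterEach above.
	sh("umount " + dir)
	sh("losetup -d " + loopDev)
	sh("rm -r " + dir)
}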
• [SLOW TEST:13.911 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":558,"failed":0} [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:37.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:14:37.739: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:37.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1234" for this suite. 
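Several of the specs above log "Creating a PV followed by a PVC" and then wait for the pair to bind. A minimal client-go sketch of that pattern, pre-binding the claim through spec.volumeName, follows; it assumes a kubeconfig at /root/.kube/config, uses a hostPath volume and the "default" namespace for brevity, and all object names and paths are illustrative. Field shapes follow the v1.22-era k8s.io/api types used by this suite.

// Sketch: create a PV, then a PVC that pre-binds to it via spec.volumeName.
// Names, paths, and namespace are illustrative only.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	sc := "" // empty storage class: rely on the explicit volumeName binding

	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-example"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:    corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/hostpath-example"},
			},
		},
	}
	if _, err := cs.CoreV1().PersistentVolumes().Create(ctx, pv, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-example", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources:        corev1.ResourceRequirements{Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")}},
			StorageClassName: &sc,
			VolumeName:       pv.Name, // pre-bind the claim to the PV created above
		},
	}
	if _, err := cs.CoreV1().PersistentVolumeClaims("default").Create(ctx, pvc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}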
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:457 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:577 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:16.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker" using path "/tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b" Oct 5 12:14:18.304: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b && dd if=/dev/zero of=/tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b/file] Namespace:persistent-local-volumes-test-9401 PodName:hostexec-v122-worker-5g2fn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:18.305: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:18.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9401 PodName:hostexec-v122-worker-5g2fn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:18.470: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:18.614: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop8 && mount -t ext4 /dev/loop8 /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b && chmod o+rwx /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b] Namespace:persistent-local-volumes-test-9401 PodName:hostexec-v122-worker-5g2fn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:18.614: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Creating local PVCs and PVs Oct 5 12:14:19.084: INFO: Creating a PV followed by a PVC Oct 5 12:14:19.094: INFO: Waiting for PV local-pvlxvhb to bind to PVC pvc-7nwr9 Oct 5 12:14:19.094: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7nwr9] to have phase Bound Oct 5 12:14:19.097: INFO: PersistentVolumeClaim pvc-7nwr9 found but phase is Pending instead of Bound. Oct 5 12:14:21.102: INFO: PersistentVolumeClaim pvc-7nwr9 found and phase=Bound (2.008086808s) Oct 5 12:14:21.102: INFO: Waiting up to 3m0s for PersistentVolume local-pvlxvhb to have phase Bound Oct 5 12:14:21.105: INFO: PersistentVolume local-pvlxvhb found and phase=Bound (3.180828ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:14:29.130: INFO: pod "pod-17839233-331a-465f-bb00-7539c8b69d01" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:14:29.131: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9401 PodName:pod-17839233-331a-465f-bb00-7539c8b69d01 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:29.131: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:29.259: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:14:29.259: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9401 PodName:pod-17839233-331a-465f-bb00-7539c8b69d01 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:29.259: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:29.378: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-17839233-331a-465f-bb00-7539c8b69d01 in namespace persistent-local-volumes-test-9401 STEP: Creating pod2 STEP: Creating a pod Oct 5 12:14:37.403: INFO: pod "pod-cafa9aaf-7b8d-4ad6-91b6-6c3557c757b9" created on Node "v122-worker" STEP: Reading in pod2 Oct 5 12:14:37.403: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9401 PodName:pod-cafa9aaf-7b8d-4ad6-91b6-6c3557c757b9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:14:37.403: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:37.528: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-cafa9aaf-7b8d-4ad6-91b6-6c3557c757b9 in namespace persistent-local-volumes-test-9401 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:14:37.533: INFO: Deleting PersistentVolumeClaim "pvc-7nwr9" Oct 5 12:14:37.538: INFO: Deleting PersistentVolume "local-pvlxvhb" Oct 5 12:14:37.543: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b] Namespace:persistent-local-volumes-test-9401 PodName:hostexec-v122-worker-5g2fn ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:37.543: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:37.698: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9401 PodName:hostexec-v122-worker-5g2fn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:37.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop8" on node "v122-worker" at path /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b/file Oct 5 12:14:37.864: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-9401 PodName:hostexec-v122-worker-5g2fn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:37.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b Oct 5 12:14:38.000: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c73cbad2-9a56-4315-b05d-d489cb20726b] Namespace:persistent-local-volumes-test-9401 PodName:hostexec-v122-worker-5g2fn ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:38.000: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:38.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9401" for this suite. 
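After creating each PV/PVC pair, the framework repeatedly logs lines such as "PersistentVolumeClaim ... found but phase is Pending instead of Bound" until the claim binds or the 3m0s timeout expires. A minimal sketch of that wait with client-go follows, assuming the same kubeconfig path as above; the namespace and claim name are illustrative.

// Sketch: poll a PersistentVolumeClaim until it reports phase Bound, the same
// wait the framework logs above. Namespace and claim name are illustrative.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "default", "pvc-example"

	// Check every 2s, give up after 3m, matching the timeouts seen in the log.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolumeClaim %s phase=%s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
	if err != nil {
		log.Fatalf("claim %s never became Bound: %v", name, err)
	}
}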
• [SLOW TEST:21.926 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":24,"skipped":744,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:38.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Oct 5 12:14:38.239: INFO: The status of Pod test-hostpath-type-kktbx is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:14:40.243: INFO: The status of Pod test-hostpath-type-kktbx is Running (Ready = true) STEP: running on node v122-worker2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:44.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-586" for this suite. 
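The HostPathType spec above first lets the kubelet create a file 'afile' through a volume of type HostPathFileOrCreate, then tries to mount a non-existent 'does-not-exist-file' with type HostPathFile and waits for the resulting mount-failure event. A minimal sketch of the two volume definitions involved follows, assuming the k8s.io/api module; the volume names and host paths are hypothetical, since the log does not show the actual paths.

// Sketch: hostPath volumes with an explicit HostPathType, as exercised above.
// With HostPathFileOrCreate the kubelet creates the file; with HostPathFile it
// refuses to mount when the file is missing. Paths and names are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func hostPathVolume(name, path string, t corev1.HostPathType) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &t},
		},
	}
}

func main() {
	// Mountable: the kubelet creates the file if it is absent.
	ok := hostPathVolume("afile", "/mnt/test/afile", corev1.HostPathFileOrCreate)

	// Fails to mount: the file does not exist and HostPathFile never creates it,
	// which produces the error event the spec above waits for.
	bad := hostPathVolume("missing", "/mnt/test/does-not-exist-file", corev1.HostPathFile)

	fmt.Printf("ok volume:  %+v\n", ok)
	fmt.Printf("bad volume: %+v\n", bad)
}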
• [SLOW TEST:6.113 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile","total":-1,"completed":25,"skipped":757,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:44.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:14:44.367: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:44.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2149" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:457 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:587 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:44.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:57 Oct 5 12:14:44.571: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:14:44.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1220" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:91 S [SKIPPING] in Spec Setup (BeforeEach) [0.047 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct FilesystemMode PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:213 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:61 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:31.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "v122-worker" STEP: Initializing test volumes Oct 5 12:14:39.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a1de4b84-8ed2-4f29-b3f4-33df6e91d480] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:39.192: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:39.332: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-75908bf0-e00a-4e27-97d8-60995dc23457] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:39.332: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:39.464: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-030a542b-8cfc-494d-87f7-bb56d7fedf9c] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:39.464: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:39.623: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-19e1aea5-ed05-48b3-a237-d0d58c47fa52] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:39.623: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:39.781: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e5c08c16-6db0-45bb-a397-f2f085399bb7] 
Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:39.781: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:39.899: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4c953e37-eb5c-4049-a634-070ab793d0d5] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:39.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:14:40.032: INFO: Creating a PV followed by a PVC Oct 5 12:14:40.042: INFO: Creating a PV followed by a PVC Oct 5 12:14:40.050: INFO: Creating a PV followed by a PVC Oct 5 12:14:40.058: INFO: Creating a PV followed by a PVC Oct 5 12:14:40.066: INFO: Creating a PV followed by a PVC Oct 5 12:14:40.074: INFO: Creating a PV followed by a PVC Oct 5 12:14:50.135: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "v122-worker2" STEP: Initializing test volumes Oct 5 12:14:52.149: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-21e6a682-2c54-4dcc-8348-d597a9acb3fb] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:52.149: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:52.304: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6131078b-8e08-430b-886e-b93a15e4adfc] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:52.304: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:52.442: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cb5852f3-7620-480d-bb67-5d953923124d] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:52.443: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:52.591: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4e6786dd-27be-4eda-8570-0b8d58562855] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:52.591: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:52.740: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-efa31de4-3f60-4142-b476-2dde613befcc] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:52.740: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:52.878: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-83e99823-d0ce-489c-8fe5-a6271e582095] 
Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:52.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:14:53.025: INFO: Creating a PV followed by a PVC Oct 5 12:14:53.034: INFO: Creating a PV followed by a PVC Oct 5 12:14:53.043: INFO: Creating a PV followed by a PVC Oct 5 12:14:53.051: INFO: Creating a PV followed by a PVC Oct 5 12:14:53.059: INFO: Creating a PV followed by a PVC Oct 5 12:14:53.067: INFO: Creating a PV followed by a PVC Oct 5 12:15:03.128: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Oct 5 12:15:03.128: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Oct 5 12:15:03.129: INFO: Deleting PersistentVolumeClaim "pvc-rlsmh" Oct 5 12:15:03.136: INFO: Deleting PersistentVolume "local-pvmzwch" STEP: Cleaning up PVC and PV Oct 5 12:15:03.141: INFO: Deleting PersistentVolumeClaim "pvc-7xsxb" Oct 5 12:15:03.147: INFO: Deleting PersistentVolume "local-pvwjb9d" STEP: Cleaning up PVC and PV Oct 5 12:15:03.151: INFO: Deleting PersistentVolumeClaim "pvc-fsl2c" Oct 5 12:15:03.156: INFO: Deleting PersistentVolume "local-pvnrh57" STEP: Cleaning up PVC and PV Oct 5 12:15:03.161: INFO: Deleting PersistentVolumeClaim "pvc-xwq6l" Oct 5 12:15:03.166: INFO: Deleting PersistentVolume "local-pv4l84f" STEP: Cleaning up PVC and PV Oct 5 12:15:03.170: INFO: Deleting PersistentVolumeClaim "pvc-rq5zp" Oct 5 12:15:03.175: INFO: Deleting PersistentVolume "local-pvnmhvc" STEP: Cleaning up PVC and PV Oct 5 12:15:03.179: INFO: Deleting PersistentVolumeClaim "pvc-fj68f" Oct 5 12:15:03.184: INFO: Deleting PersistentVolume "local-pvvhkdc" STEP: Removing the test directory Oct 5 12:15:03.188: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a1de4b84-8ed2-4f29-b3f4-33df6e91d480] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:03.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:03.337: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-75908bf0-e00a-4e27-97d8-60995dc23457] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:03.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:03.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-030a542b-8cfc-494d-87f7-bb56d7fedf9c] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:03.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 
12:15:03.613: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-19e1aea5-ed05-48b3-a237-d0d58c47fa52] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:03.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:03.755: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e5c08c16-6db0-45bb-a397-f2f085399bb7] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:03.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:03.919: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4c953e37-eb5c-4049-a634-070ab793d0d5] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker-dfw58 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:03.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Oct 5 12:15:04.063: INFO: Deleting PersistentVolumeClaim "pvc-w5ksq" Oct 5 12:15:04.067: INFO: Deleting PersistentVolume "local-pvf4s62" STEP: Cleaning up PVC and PV Oct 5 12:15:04.072: INFO: Deleting PersistentVolumeClaim "pvc-th48p" Oct 5 12:15:04.077: INFO: Deleting PersistentVolume "local-pvmfh7d" STEP: Cleaning up PVC and PV Oct 5 12:15:04.081: INFO: Deleting PersistentVolumeClaim "pvc-nx8j6" Oct 5 12:15:04.086: INFO: Deleting PersistentVolume "local-pv9wcx9" STEP: Cleaning up PVC and PV Oct 5 12:15:04.090: INFO: Deleting PersistentVolumeClaim "pvc-xbq7v" Oct 5 12:15:04.094: INFO: Deleting PersistentVolume "local-pvrzgtv" STEP: Cleaning up PVC and PV Oct 5 12:15:04.099: INFO: Deleting PersistentVolumeClaim "pvc-tbrgp" Oct 5 12:15:04.103: INFO: Deleting PersistentVolume "local-pv789pj" STEP: Cleaning up PVC and PV Oct 5 12:15:04.113: INFO: Deleting PersistentVolumeClaim "pvc-9gxf9" Oct 5 12:15:04.118: INFO: Deleting PersistentVolume "local-pvvvzpd" STEP: Removing the test directory Oct 5 12:15:04.122: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-21e6a682-2c54-4dcc-8348-d597a9acb3fb] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:04.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:04.268: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6131078b-8e08-430b-886e-b93a15e4adfc] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:04.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:04.355: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cb5852f3-7620-480d-bb67-5d953923124d] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:04.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:04.480: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4e6786dd-27be-4eda-8570-0b8d58562855] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:04.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:04.610: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-efa31de4-3f60-4142-b476-2dde613befcc] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:04.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:04.718: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-83e99823-d0ce-489c-8fe5-a6271e582095] Namespace:persistent-local-volumes-test-8414 PodName:hostexec-v122-worker2-5z5kv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:04.718: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:04.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8414" for this suite. 
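The repeated "PVCs were not bound within 10s (that's good)" lines above are the expected outcome of delayed binding: the local PVs and PVCs are created against a StorageClass whose volumeBindingMode is WaitForFirstConsumer, so claims stay Pending until a pod that uses them is actually scheduled. A hand-written equivalent of one PV/PVC pair (class name, path and node are illustrative) might look like:

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage                 # illustrative name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-volume-test-example   # directory created on the node, as in the mkdir steps above
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["v122-worker"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-example
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
EOF

With this setup, kubectl get pvc shows the claim Pending until a consuming pod lands on v122-worker; only then does it bind to a node-affine PV, which is exactly what the "not bound within 10s" check verifies.
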
S [SKIPPING] [33.739 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:412 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:04.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Oct 5 12:15:06.942: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6803 PodName:hostexec-v122-worker-9xm44 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:06.942: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:07.100: INFO: exec v122-worker: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Oct 5 12:15:07.100: INFO: exec v122-worker: stdout: "0\n" Oct 5 12:15:07.100: INFO: exec v122-worker: stderr: "ls: cannot access '/mnt/disks/by-uuid/google-local-ssds-scsi-fs/': No such file or directory\n" Oct 5 12:15:07.100: INFO: exec v122-worker: exit code: 0 Oct 5 12:15:07.100: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:07.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6803" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.226 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1250 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:07.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 STEP: Building a driver namespace object, basename csi-mock-volumes-5773 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Oct 5 12:14:07.939: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-attacher Oct 5 12:14:07.945: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5773 Oct 5 12:14:07.945: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5773 Oct 5 12:14:07.949: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5773 Oct 5 12:14:07.953: INFO: creating *v1.Role: csi-mock-volumes-5773-6997/external-attacher-cfg-csi-mock-volumes-5773 Oct 5 12:14:07.957: INFO: creating *v1.RoleBinding: csi-mock-volumes-5773-6997/csi-attacher-role-cfg Oct 5 12:14:07.961: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-provisioner Oct 5 12:14:07.965: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5773 Oct 5 12:14:07.965: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5773 Oct 5 12:14:07.969: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5773 Oct 5 12:14:07.973: INFO: creating *v1.Role: csi-mock-volumes-5773-6997/external-provisioner-cfg-csi-mock-volumes-5773 Oct 5 12:14:07.977: INFO: creating *v1.RoleBinding: csi-mock-volumes-5773-6997/csi-provisioner-role-cfg Oct 5 12:14:07.981: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-resizer Oct 5 12:14:07.984: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5773 Oct 5 12:14:07.984: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5773 Oct 5 12:14:07.988: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5773 Oct 5 12:14:07.992: INFO: creating *v1.Role: csi-mock-volumes-5773-6997/external-resizer-cfg-csi-mock-volumes-5773 Oct 5 12:14:07.995: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-5773-6997/csi-resizer-role-cfg Oct 5 12:14:07.999: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-snapshotter Oct 5 12:14:08.002: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5773 Oct 5 12:14:08.002: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5773 Oct 5 12:14:08.006: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5773 Oct 5 12:14:08.010: INFO: creating *v1.Role: csi-mock-volumes-5773-6997/external-snapshotter-leaderelection-csi-mock-volumes-5773 Oct 5 12:14:08.014: INFO: creating *v1.RoleBinding: csi-mock-volumes-5773-6997/external-snapshotter-leaderelection Oct 5 12:14:08.017: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-mock Oct 5 12:14:08.021: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5773 Oct 5 12:14:08.024: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5773 Oct 5 12:14:08.028: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5773 Oct 5 12:14:08.032: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5773 Oct 5 12:14:08.040: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5773 Oct 5 12:14:08.044: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5773 Oct 5 12:14:08.047: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5773 Oct 5 12:14:08.051: INFO: creating *v1.StatefulSet: csi-mock-volumes-5773-6997/csi-mockplugin Oct 5 12:14:08.058: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5773 Oct 5 12:14:08.062: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5773" Oct 5 12:14:08.065: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5773 to register on node v122-worker I1005 12:14:11.089451 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1005 12:14:11.091551 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5773","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:14:11.093809 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1005 12:14:11.096196 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1005 12:14:11.199279 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5773","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1005 12:14:12.154036 24 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5773"},"Error":"","FullError":null} STEP: Creating pod Oct 5 12:14:13.079: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:14:13.085: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-xgmnk] to have phase Bound Oct 5 12:14:13.088: INFO: PersistentVolumeClaim pvc-xgmnk found but phase is Pending instead of Bound. I1005 12:14:13.092072 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519"}}},"Error":"","FullError":null} Oct 5 12:14:15.092: INFO: PersistentVolumeClaim pvc-xgmnk found and phase=Bound (2.007050187s) Oct 5 12:14:15.106: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-xgmnk] to have phase Bound Oct 5 12:14:15.113: INFO: PersistentVolumeClaim pvc-xgmnk found and phase=Bound (6.761723ms) STEP: Waiting for expected CSI calls I1005 12:14:15.274991 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:14:15.277923 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:14:15.280462 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519","storage.kubernetes.io/csiProvisionerIdentity":"1664972051097-8081-csi-mock-csi-mock-volumes-5773"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1005 12:14:15.882360 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:14:15.884900 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:14:15.887743 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519","storage.kubernetes.io/csiProvisionerIdentity":"1664972051097-8081-csi-mock-csi-mock-volumes-5773"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake 
error","FullError":{"code":4,"message":"fake error"}} I1005 12:14:16.990982 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:14:16.993376 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:14:16.995902 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519","storage.kubernetes.io/csiProvisionerIdentity":"1664972051097-8081-csi-mock-csi-mock-volumes-5773"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1005 12:14:19.011552 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:14:19.014882 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:14:19.017: INFO: >>> kubeConfig: /root/.kube/config I1005 12:14:19.174566 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519","storage.kubernetes.io/csiProvisionerIdentity":"1664972051097-8081-csi-mock-csi-mock-volumes-5773"}},"Response":{},"Error":"","FullError":null} I1005 12:14:19.183951 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:14:19.186235 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Oct 5 12:14:19.188: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:19.344: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:19.502: INFO: >>> kubeConfig: /root/.kube/config I1005 12:14:19.645351 24 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519/globalmount","target_path":"/var/lib/kubelet/pods/6b4c3c1a-df5c-4ff9-9829-229bc2fe7e31/volumes/kubernetes.io~csi/pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519","storage.kubernetes.io/csiProvisionerIdentity":"1664972051097-8081-csi-mock-csi-mock-volumes-5773"}},"Response":{},"Error":"","FullError":null} STEP: Waiting for pod to be running STEP: Deleting the previously created pod Oct 5 12:14:22.125: INFO: Deleting pod "pvc-volume-tester-cxh9j" in namespace "csi-mock-volumes-5773" Oct 5 12:14:22.130: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cxh9j" to be fully deleted Oct 5 12:14:23.351: INFO: >>> kubeConfig: /root/.kube/config I1005 12:14:23.478549 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/6b4c3c1a-df5c-4ff9-9829-229bc2fe7e31/volumes/kubernetes.io~csi/pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519/mount"},"Response":{},"Error":"","FullError":null} I1005 12:14:23.557553 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1005 12:14:23.559743 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-cxh9j Oct 5 12:14:29.137: INFO: Deleting pod "pvc-volume-tester-cxh9j" in namespace "csi-mock-volumes-5773" STEP: Deleting claim pvc-xgmnk Oct 5 12:14:29.147: INFO: Waiting up to 2m0s for PersistentVolume pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519 to get deleted Oct 5 12:14:29.150: INFO: PersistentVolume pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519 found and phase=Bound (3.303599ms) I1005 12:14:29.166002 24 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Oct 5 12:14:31.153: INFO: PersistentVolume pvc-a8e3a1ad-95ce-4223-9b76-55468bd10519 was removed STEP: Deleting storageclass csi-mock-volumes-5773-sc26pkb STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5773 STEP: Waiting for namespaces [csi-mock-volumes-5773] to vanish STEP: uninstalling csi mock driver Oct 5 12:14:44.187: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-attacher Oct 5 12:14:44.192: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5773 Oct 5 12:14:44.197: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5773 Oct 5 12:14:44.202: INFO: deleting *v1.Role: csi-mock-volumes-5773-6997/external-attacher-cfg-csi-mock-volumes-5773 Oct 5 12:14:44.207: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5773-6997/csi-attacher-role-cfg Oct 5 12:14:44.212: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-provisioner Oct 5 12:14:44.216: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5773 Oct 5 12:14:44.221: INFO: deleting 
*v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5773 Oct 5 12:14:44.225: INFO: deleting *v1.Role: csi-mock-volumes-5773-6997/external-provisioner-cfg-csi-mock-volumes-5773 Oct 5 12:14:44.231: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5773-6997/csi-provisioner-role-cfg Oct 5 12:14:44.236: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-resizer Oct 5 12:14:44.241: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5773 Oct 5 12:14:44.245: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5773 Oct 5 12:14:44.250: INFO: deleting *v1.Role: csi-mock-volumes-5773-6997/external-resizer-cfg-csi-mock-volumes-5773 Oct 5 12:14:44.254: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5773-6997/csi-resizer-role-cfg Oct 5 12:14:44.259: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-snapshotter Oct 5 12:14:44.264: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5773 Oct 5 12:14:44.268: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5773 Oct 5 12:14:44.272: INFO: deleting *v1.Role: csi-mock-volumes-5773-6997/external-snapshotter-leaderelection-csi-mock-volumes-5773 Oct 5 12:14:44.277: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5773-6997/external-snapshotter-leaderelection Oct 5 12:14:44.282: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5773-6997/csi-mock Oct 5 12:14:44.286: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5773 Oct 5 12:14:44.291: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5773 Oct 5 12:14:44.295: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5773 Oct 5 12:14:44.300: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5773 Oct 5 12:14:44.304: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5773 Oct 5 12:14:44.308: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5773 Oct 5 12:14:44.312: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5773 Oct 5 12:14:44.317: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5773-6997/csi-mockplugin Oct 5 12:14:44.322: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5773 STEP: deleting the driver namespace: csi-mock-volumes-5773-6997 STEP: Waiting for namespaces [csi-mock-volumes-5773-6997] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:26.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:78.497 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:735 should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:829 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error","total":-1,"completed":10,"skipped":587,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] 
Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:37.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "v122-worker" STEP: Initializing test volumes Oct 5 12:14:41.811: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-55177544-7501-4c64-a0bb-4531374eaacb] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:41.812: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:41.956: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-39025901-a009-49b5-ae5e-16964b915448] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:41.956: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:42.116: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bff5c6d4-7e84-4f50-8c45-0cee53e07758] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:42.116: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:42.249: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c15c10a9-aaa6-4ff3-85b9-4a789828be59] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:42.249: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:42.353: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c15925cb-b48a-4088-9d25-ccd2715a8000] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:42.353: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:42.482: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f5e7823d-e807-4dc6-a01c-9d4c52f9ceb5] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} 
Oct 5 12:14:42.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:14:42.624: INFO: Creating a PV followed by a PVC Oct 5 12:14:42.633: INFO: Creating a PV followed by a PVC Oct 5 12:14:42.641: INFO: Creating a PV followed by a PVC Oct 5 12:14:42.648: INFO: Creating a PV followed by a PVC Oct 5 12:14:42.657: INFO: Creating a PV followed by a PVC Oct 5 12:14:42.673: INFO: Creating a PV followed by a PVC Oct 5 12:14:52.736: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "v122-worker2" STEP: Initializing test volumes Oct 5 12:14:54.749: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1e8d9cbf-f8af-40b6-a624-11056cd2ead2] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:54.749: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:54.893: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fe2d0c43-887c-4fa8-9630-deea573cbeae] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:54.894: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:55.043: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5cb6c6a3-e050-4d94-8162-78b90f68abed] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:55.043: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:55.183: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fe2629bb-af38-422b-b4c3-b3c891cf42ca] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:55.183: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:55.331: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cd8b3736-ca85-4fb5-a46b-b276e023f28e] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:55.331: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:14:55.499: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-bfc99e64-38d2-4121-afd6-96d0f2f5544a] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:14:55.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:14:55.631: INFO: Creating a PV followed by a PVC Oct 5 12:14:55.640: INFO: Creating a PV followed by a PVC Oct 5 12:14:55.647: INFO: Creating a PV followed by a PVC Oct 5 12:14:55.655: INFO: Creating a PV followed by a PVC Oct 5 12:14:55.663: INFO: Creating a PV followed by a PVC Oct 5 12:14:55.671: INFO: Creating a PV followed by a PVC Oct 5 12:15:05.731: 
INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 STEP: Creating a StatefulSet with pod affinity on nodes Oct 5 12:15:05.740: INFO: Found 0 stateful pods, waiting for 3 Oct 5 12:15:15.746: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 12:15:15.747: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 12:15:15.747: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 5 12:15:25.745: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 12:15:25.745: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 12:15:25.745: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Oct 5 12:15:25.750: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Oct 5 12:15:25.753: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (3.070842ms) Oct 5 12:15:25.753: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-0] to have phase Bound Oct 5 12:15:25.756: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-0 found and phase=Bound (3.095492ms) Oct 5 12:15:25.756: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Oct 5 12:15:25.759: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (2.966971ms) Oct 5 12:15:25.759: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-1] to have phase Bound Oct 5 12:15:25.762: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-1 found and phase=Bound (3.046197ms) Oct 5 12:15:25.762: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Oct 5 12:15:25.765: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (3.032579ms) Oct 5 12:15:25.765: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-2] to have phase Bound Oct 5 12:15:25.768: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-2 found and phase=Bound (2.851108ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Oct 5 12:15:25.768: INFO: Deleting PersistentVolumeClaim "pvc-rqzxw" Oct 5 12:15:25.773: INFO: Deleting PersistentVolume "local-pvmlm7d" STEP: Cleaning up PVC and PV Oct 5 12:15:25.779: INFO: Deleting PersistentVolumeClaim "pvc-n2zlw" Oct 5 12:15:25.783: INFO: Deleting PersistentVolume "local-pvfjjzt" STEP: Cleaning up PVC and PV Oct 5 12:15:25.788: INFO: Deleting PersistentVolumeClaim "pvc-4mmmf" Oct 5 12:15:25.792: INFO: Deleting PersistentVolume "local-pv9qzps" STEP: Cleaning up PVC and PV Oct 5 12:15:25.797: INFO: Deleting PersistentVolumeClaim "pvc-pfg9q" Oct 5 12:15:25.801: INFO: Deleting PersistentVolume "local-pv5g8qp" STEP: Cleaning up PVC and PV Oct 5 12:15:25.806: INFO: Deleting PersistentVolumeClaim 
"pvc-cgbgj" Oct 5 12:15:25.811: INFO: Deleting PersistentVolume "local-pv5ql79" STEP: Cleaning up PVC and PV Oct 5 12:15:25.815: INFO: Deleting PersistentVolumeClaim "pvc-sw9t4" Oct 5 12:15:25.820: INFO: Deleting PersistentVolume "local-pvh8n79" STEP: Removing the test directory Oct 5 12:15:25.826: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1e8d9cbf-f8af-40b6-a624-11056cd2ead2] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:25.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:25.969: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fe2d0c43-887c-4fa8-9630-deea573cbeae] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:25.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:26.102: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5cb6c6a3-e050-4d94-8162-78b90f68abed] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:26.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:26.238: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fe2629bb-af38-422b-b4c3-b3c891cf42ca] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:26.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:26.344: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cd8b3736-ca85-4fb5-a46b-b276e023f28e] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:26.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:26.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bfc99e64-38d2-4121-afd6-96d0f2f5544a] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker2-m5qhg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:26.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Oct 5 12:15:26.591: INFO: Deleting PersistentVolumeClaim "pvc-lrpnr" Oct 5 12:15:26.596: INFO: Deleting PersistentVolume "local-pvfjst4" STEP: Cleaning up PVC and PV Oct 5 12:15:26.601: INFO: Deleting PersistentVolumeClaim "pvc-ktdcl" Oct 5 12:15:26.605: INFO: Deleting PersistentVolume "local-pvd4zhz" STEP: Cleaning up PVC and PV Oct 5 12:15:26.610: INFO: Deleting PersistentVolumeClaim "pvc-jzsgv" Oct 5 12:15:26.615: INFO: Deleting PersistentVolume "local-pv69qxm" STEP: Cleaning up PVC and PV Oct 5 12:15:26.620: INFO: Deleting 
PersistentVolumeClaim "pvc-pwv79" Oct 5 12:15:26.624: INFO: Deleting PersistentVolume "local-pv5qgqf" STEP: Cleaning up PVC and PV Oct 5 12:15:26.629: INFO: Deleting PersistentVolumeClaim "pvc-hc4qk" Oct 5 12:15:26.634: INFO: Deleting PersistentVolume "local-pv4prt6" STEP: Cleaning up PVC and PV Oct 5 12:15:26.639: INFO: Deleting PersistentVolumeClaim "pvc-ncnsc" Oct 5 12:15:26.644: INFO: Deleting PersistentVolume "local-pvrfkpz" STEP: Removing the test directory Oct 5 12:15:26.649: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-55177544-7501-4c64-a0bb-4531374eaacb] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:26.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:26.809: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-39025901-a009-49b5-ae5e-16964b915448] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:26.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:26.894: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bff5c6d4-7e84-4f50-8c45-0cee53e07758] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:26.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:26.983: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c15c10a9-aaa6-4ff3-85b9-4a789828be59] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:26.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:27.082: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c15925cb-b48a-4088-9d25-ccd2715a8000] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:27.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:27.209: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f5e7823d-e807-4dc6-a01c-9d4c52f9ceb5] Namespace:persistent-local-volumes-test-1646 PodName:hostexec-v122-worker-m7h64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:27.209: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:27.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1646" for this suite. 
• [SLOW TEST:49.603 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod has affinity","total":-1,"completed":9,"skipped":562,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:27.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mounted-volume-expand STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:61 Oct 5 12:15:27.409: INFO: Only supported for providers [aws gce] (not local) [AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:27.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mounted-volume-expand-5019" for this suite. [AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:108 Oct 5 12:15:27.420: INFO: AfterEach: Cleaning up resources for mounted volume resize S [SKIPPING] in Spec Setup (BeforeEach) [0.048 seconds] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should verify mounted devices can be resized [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122 Only supported for providers [aws gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:62 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:27.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Oct 5 12:15:27.642: INFO: The status of Pod test-hostpath-type-jg5p2 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:15:29.648: INFO: The status of Pod test-hostpath-type-jg5p2 is Running 
(Ready = true) STEP: running on node v122-worker2 [It] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:31.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-9449" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket","total":-1,"completed":10,"skipped":661,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:31.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Oct 5 12:15:31.751: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:31.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-1861" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.046 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:31.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 STEP: Creating a pod to test hostPath mode Oct 5 12:15:31.923: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9366" to be "Succeeded or Failed" Oct 5 12:15:31.926: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.147367ms Oct 5 12:15:33.930: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.007334834s Oct 5 12:15:35.935: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011927371s STEP: Saw pod success Oct 5 12:15:35.935: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Oct 5 12:15:35.938: INFO: Trying to get logs from node v122-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Oct 5 12:15:35.964: INFO: Waiting for pod pod-host-path-test to disappear Oct 5 12:15:35.967: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:35.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9366" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":11,"skipped":734,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:35.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Oct 5 12:15:36.031: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:36.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-4539" for this suite. 
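
The pod-host-path-test wait above ("Succeeded or Failed") is an ordinary phase poll against the API. Reproduced outside the e2e framework it is just client-go plus the wait package; the sketch below is illustrative, assuming a kubeconfig at /root/.kube/config and placeholder namespace and pod names.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Poll a pod until it reaches Succeeded or Failed, mirroring the
	// "Waiting up to 5m0s for pod ..." entries above.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	ns, name := "hostpath-example", "pod-host-path-test" // placeholder namespace
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("pod %q phase=%s\n", name, pod.Status.Phase)
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
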
S [SKIPPING] in Spec Setup (BeforeEach) [0.047 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:90 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:26.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1" Oct 5 12:15:28.545: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1" "/tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1"] Namespace:persistent-local-volumes-test-8880 PodName:hostexec-v122-worker2-kw744 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:28.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:15:28.702: INFO: Creating a PV followed by a PVC Oct 5 12:15:28.711: INFO: Waiting for PV local-pvz4ls9 to bind to PVC pvc-zgtvg Oct 5 12:15:28.711: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zgtvg] to have phase Bound Oct 5 12:15:28.715: INFO: PersistentVolumeClaim pvc-zgtvg found but phase is Pending instead of Bound. Oct 5 12:15:30.720: INFO: PersistentVolumeClaim pvc-zgtvg found but phase is Pending instead of Bound. Oct 5 12:15:32.724: INFO: PersistentVolumeClaim pvc-zgtvg found but phase is Pending instead of Bound. 
Oct 5 12:15:34.728: INFO: PersistentVolumeClaim pvc-zgtvg found and phase=Bound (6.016992652s) Oct 5 12:15:34.728: INFO: Waiting up to 3m0s for PersistentVolume local-pvz4ls9 to have phase Bound Oct 5 12:15:34.732: INFO: PersistentVolume local-pvz4ls9 found and phase=Bound (3.481956ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Oct 5 12:15:36.757: INFO: pod "pod-6faf9fa7-38ea-4833-9c38-4b0b62918099" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:15:36.757: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8880 PodName:pod-6faf9fa7-38ea-4833-9c38-4b0b62918099 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:36.757: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:36.875: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:15:36.875: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8880 PodName:pod-6faf9fa7-38ea-4833-9c38-4b0b62918099 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:36.876: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:36.996: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Oct 5 12:15:41.016: INFO: pod "pod-6c1fe12b-6941-42d0-b2ae-5ed1cadb57cc" created on Node "v122-worker2" Oct 5 12:15:41.016: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8880 PodName:pod-6c1fe12b-6941-42d0-b2ae-5ed1cadb57cc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:41.016: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:41.145: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Oct 5 12:15:41.145: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8880 PodName:pod-6c1fe12b-6941-42d0-b2ae-5ed1cadb57cc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:41.145: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:41.284: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Oct 5 12:15:41.284: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8880 PodName:pod-6faf9fa7-38ea-4833-9c38-4b0b62918099 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:41.284: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:41.414: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-6faf9fa7-38ea-4833-9c38-4b0b62918099 in namespace persistent-local-volumes-test-8880 STEP: Deleting pod2 STEP: Deleting pod pod-6c1fe12b-6941-42d0-b2ae-5ed1cadb57cc in namespace persistent-local-volumes-test-8880 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:15:41.425: INFO: Deleting PersistentVolumeClaim "pvc-zgtvg" Oct 5 12:15:41.430: INFO: Deleting PersistentVolume "local-pvz4ls9" STEP: Unmount tmpfs mount point on node "v122-worker2" at path "/tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1" Oct 5 12:15:41.435: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1"] Namespace:persistent-local-volumes-test-8880 PodName:hostexec-v122-worker2-kw744 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:41.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:15:41.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b03c2213-8207-4822-82d3-1092692be4e1] Namespace:persistent-local-volumes-test-8880 PodName:hostexec-v122-worker2-kw744 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:41.584: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:41.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8880" for this suite. 
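
The write-from-pod1 / read-from-pod2 verification above is driven entirely by exec'ing shell commands inside the pods (the ExecWithOptions entries). Outside the e2e framework the same round trip can be reproduced with client-go's remotecommand package; the sketch below is illustrative rather than the framework's helper, and assumes a kubeconfig at /root/.kube/config, the client-go modules on the module path, and placeholder namespace and pod names.

package main

import (
	"bytes"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a shell command in the given pod/container and returns stdout.
func execInPod(ns, pod, container, cmd string) (string, error) {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		return "", err
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return "", err
	}

	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"/bin/sh", "-c", cmd},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		return "", fmt.Errorf("%v (stderr: %s)", err, stderr.String())
	}
	return stdout.String(), nil
}

func main() {
	// Write in one pod, read in another; names are illustrative placeholders.
	ns := "persistent-local-volumes-test-example"
	if _, err := execInPod(ns, "pod1", "write-pod",
		"mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file"); err != nil {
		log.Fatal(err)
	}
	out, err := execInPod(ns, "pod2", "write-pod", "cat /mnt/volume1/test-file")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // expect "test-file-content"
}

The second cat returning the content written by the first pod is what demonstrates that both pods see the same tmpfs-backed local volume.
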
• [SLOW TEST:15.196 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":685,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:36.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Oct 5 12:15:36.119: INFO: The status of Pod test-hostpath-type-65qr9 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:15:38.123: INFO: The status of Pod test-hostpath-type-65qr9 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:15:40.124: INFO: The status of Pod test-hostpath-type-65qr9 is Running (Ready = true) STEP: running on node v122-worker2 STEP: Create a character device for further testing Oct 5 12:15:40.127: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-7880 PodName:test-hostpath-type-65qr9 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:40.127: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:42.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-7880" for this suite. 
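
The character-device spec above hinges on the hostPath volume's Type field: when Type is set (here HostPathCharDev), kubelet only mounts the volume if the path already exists as an object of that type, otherwise the pod fails with the type-check event the test waits for. A minimal sketch of such a typed hostPath volume, using a generic image and a placeholder path rather than the suite's generated ones:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With HostPathCharDev, kubelet refuses the mount unless the path is an
	// existing character device; a non-existent path produces the error event.
	hostPathType := corev1.HostPathCharDev

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-hostpath-type-chardev"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "host-path-testing",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "ls -l /mnt/test"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "chardev",
					MountPath: "/mnt/test",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "chardev",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/mnt/test/does-not-exist-char-dev",
						Type: &hostPathType,
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].HostPath)
}
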
• [SLOW TEST:6.216 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev","total":-1,"completed":12,"skipped":758,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:07.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339 STEP: Building a driver namespace object, basename csi-mock-volumes-2611 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:15:07.328: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-attacher Oct 5 12:15:07.332: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2611 Oct 5 12:15:07.332: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2611 Oct 5 12:15:07.336: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2611 Oct 5 12:15:07.339: INFO: creating *v1.Role: csi-mock-volumes-2611-2667/external-attacher-cfg-csi-mock-volumes-2611 Oct 5 12:15:07.343: INFO: creating *v1.RoleBinding: csi-mock-volumes-2611-2667/csi-attacher-role-cfg Oct 5 12:15:07.347: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-provisioner Oct 5 12:15:07.351: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2611 Oct 5 12:15:07.351: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2611 Oct 5 12:15:07.355: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2611 Oct 5 12:15:07.358: INFO: creating *v1.Role: csi-mock-volumes-2611-2667/external-provisioner-cfg-csi-mock-volumes-2611 Oct 5 12:15:07.362: INFO: creating *v1.RoleBinding: csi-mock-volumes-2611-2667/csi-provisioner-role-cfg Oct 5 12:15:07.366: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-resizer Oct 5 12:15:07.370: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2611 Oct 5 12:15:07.370: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2611 Oct 5 12:15:07.373: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2611 Oct 5 12:15:07.377: INFO: creating *v1.Role: csi-mock-volumes-2611-2667/external-resizer-cfg-csi-mock-volumes-2611 Oct 5 12:15:07.380: INFO: creating *v1.RoleBinding: csi-mock-volumes-2611-2667/csi-resizer-role-cfg Oct 5 12:15:07.384: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-snapshotter Oct 5 12:15:07.388: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-2611 Oct 5 12:15:07.388: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2611 Oct 5 12:15:07.392: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2611 Oct 5 12:15:07.395: INFO: creating *v1.Role: csi-mock-volumes-2611-2667/external-snapshotter-leaderelection-csi-mock-volumes-2611 Oct 5 12:15:07.399: INFO: creating *v1.RoleBinding: csi-mock-volumes-2611-2667/external-snapshotter-leaderelection Oct 5 12:15:07.403: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-mock Oct 5 12:15:07.406: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2611 Oct 5 12:15:07.409: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2611 Oct 5 12:15:07.413: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2611 Oct 5 12:15:07.417: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2611 Oct 5 12:15:07.420: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2611 Oct 5 12:15:07.424: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2611 Oct 5 12:15:07.427: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2611 Oct 5 12:15:07.431: INFO: creating *v1.StatefulSet: csi-mock-volumes-2611-2667/csi-mockplugin Oct 5 12:15:07.438: INFO: creating *v1.StatefulSet: csi-mock-volumes-2611-2667/csi-mockplugin-attacher Oct 5 12:15:07.443: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2611 to register on node v122-worker STEP: Creating pod Oct 5 12:15:12.461: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:15:12.468: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-2f8x9] to have phase Bound Oct 5 12:15:12.472: INFO: PersistentVolumeClaim pvc-2f8x9 found but phase is Pending instead of Bound. 
Oct 5 12:15:14.477: INFO: PersistentVolumeClaim pvc-2f8x9 found and phase=Bound (2.008582756s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-g42vf Oct 5 12:15:22.508: INFO: Deleting pod "pvc-volume-tester-g42vf" in namespace "csi-mock-volumes-2611" Oct 5 12:15:22.513: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g42vf" to be fully deleted STEP: Deleting claim pvc-2f8x9 Oct 5 12:15:24.531: INFO: Waiting up to 2m0s for PersistentVolume pvc-779a3daf-c979-4a4f-bfe6-3d7db399d2e6 to get deleted Oct 5 12:15:24.534: INFO: PersistentVolume pvc-779a3daf-c979-4a4f-bfe6-3d7db399d2e6 found and phase=Bound (3.23231ms) Oct 5 12:15:26.538: INFO: PersistentVolume pvc-779a3daf-c979-4a4f-bfe6-3d7db399d2e6 found and phase=Released (2.006717925s) Oct 5 12:15:28.543: INFO: PersistentVolume pvc-779a3daf-c979-4a4f-bfe6-3d7db399d2e6 found and phase=Released (4.012233056s) Oct 5 12:15:30.548: INFO: PersistentVolume pvc-779a3daf-c979-4a4f-bfe6-3d7db399d2e6 found and phase=Released (6.017412951s) Oct 5 12:15:32.552: INFO: PersistentVolume pvc-779a3daf-c979-4a4f-bfe6-3d7db399d2e6 was removed STEP: Deleting storageclass csi-mock-volumes-2611-sckxzqp STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2611 STEP: Waiting for namespaces [csi-mock-volumes-2611] to vanish STEP: uninstalling csi mock driver Oct 5 12:15:38.565: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-attacher Oct 5 12:15:38.571: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2611 Oct 5 12:15:38.575: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2611 Oct 5 12:15:38.580: INFO: deleting *v1.Role: csi-mock-volumes-2611-2667/external-attacher-cfg-csi-mock-volumes-2611 Oct 5 12:15:38.584: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2611-2667/csi-attacher-role-cfg Oct 5 12:15:38.589: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-provisioner Oct 5 12:15:38.593: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2611 Oct 5 12:15:38.598: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2611 Oct 5 12:15:38.602: INFO: deleting *v1.Role: csi-mock-volumes-2611-2667/external-provisioner-cfg-csi-mock-volumes-2611 Oct 5 12:15:38.607: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2611-2667/csi-provisioner-role-cfg Oct 5 12:15:38.611: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-resizer Oct 5 12:15:38.615: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2611 Oct 5 12:15:38.620: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2611 Oct 5 12:15:38.624: INFO: deleting *v1.Role: csi-mock-volumes-2611-2667/external-resizer-cfg-csi-mock-volumes-2611 Oct 5 12:15:38.628: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2611-2667/csi-resizer-role-cfg Oct 5 12:15:38.633: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-snapshotter Oct 5 12:15:38.637: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2611 Oct 5 12:15:38.641: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2611 Oct 5 12:15:38.645: INFO: deleting *v1.Role: csi-mock-volumes-2611-2667/external-snapshotter-leaderelection-csi-mock-volumes-2611 Oct 5 12:15:38.650: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2611-2667/external-snapshotter-leaderelection Oct 5 12:15:38.654: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2611-2667/csi-mock 
Oct 5 12:15:38.659: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2611 Oct 5 12:15:38.662: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2611 Oct 5 12:15:38.666: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2611 Oct 5 12:15:38.670: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2611 Oct 5 12:15:38.674: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2611 Oct 5 12:15:38.678: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2611 Oct 5 12:15:38.683: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2611 Oct 5 12:15:38.687: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2611-2667/csi-mockplugin Oct 5 12:15:38.692: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2611-2667/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-2611-2667 STEP: Waiting for namespaces [csi-mock-volumes-2611-2667] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:44.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:37.479 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317 should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":17,"skipped":546,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:44.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:15:46.821: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-fa9ab042-dcf1-4b09-8d5b-1f6295ded0ed && mount --bind /tmp/local-volume-test-fa9ab042-dcf1-4b09-8d5b-1f6295ded0ed /tmp/local-volume-test-fa9ab042-dcf1-4b09-8d5b-1f6295ded0ed] Namespace:persistent-local-volumes-test-3647 PodName:hostexec-v122-worker-zdclk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Oct 5 12:15:46.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:15:46.987: INFO: Creating a PV followed by a PVC Oct 5 12:15:46.997: INFO: Waiting for PV local-pvj859h to bind to PVC pvc-pgtpf Oct 5 12:15:46.997: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-pgtpf] to have phase Bound Oct 5 12:15:46.999: INFO: PersistentVolumeClaim pvc-pgtpf found but phase is Pending instead of Bound. Oct 5 12:15:49.004: INFO: PersistentVolumeClaim pvc-pgtpf found and phase=Bound (2.007151374s) Oct 5 12:15:49.004: INFO: Waiting up to 3m0s for PersistentVolume local-pvj859h to have phase Bound Oct 5 12:15:49.007: INFO: PersistentVolume local-pvj859h found and phase=Bound (3.425459ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:15:51.034: INFO: pod "pod-59dfc9df-97d3-4856-9dcb-3dd1ba17266d" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:15:51.034: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3647 PodName:pod-59dfc9df-97d3-4856-9dcb-3dd1ba17266d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:51.034: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:51.160: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Oct 5 12:15:51.160: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3647 PodName:pod-59dfc9df-97d3-4856-9dcb-3dd1ba17266d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:51.160: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:51.288: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-59dfc9df-97d3-4856-9dcb-3dd1ba17266d in namespace persistent-local-volumes-test-3647 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:15:51.294: INFO: Deleting PersistentVolumeClaim "pvc-pgtpf" Oct 5 12:15:51.298: INFO: Deleting PersistentVolume "local-pvj859h" STEP: Removing the test directory Oct 5 12:15:51.303: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-fa9ab042-dcf1-4b09-8d5b-1f6295ded0ed && rm -r /tmp/local-volume-test-fa9ab042-dcf1-4b09-8d5b-1f6295ded0ed] Namespace:persistent-local-volumes-test-3647 PodName:hostexec-v122-worker-zdclk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:51.303: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:51.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3647" for this suite. • [SLOW TEST:6.656 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":18,"skipped":568,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:41.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-db075aa8-cb55-4809-9ee4-6d13595849db" Oct 5 12:15:43.759: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-db075aa8-cb55-4809-9ee4-6d13595849db && dd if=/dev/zero of=/tmp/local-volume-test-db075aa8-cb55-4809-9ee4-6d13595849db/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-db075aa8-cb55-4809-9ee4-6d13595849db/file] Namespace:persistent-local-volumes-test-7516 PodName:hostexec-v122-worker2-twx9z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:43.759: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:43.961: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-db075aa8-cb55-4809-9ee4-6d13595849db/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7516 PodName:hostexec-v122-worker2-twx9z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:43.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:15:44.059: 
INFO: Creating a PV followed by a PVC Oct 5 12:15:44.068: INFO: Waiting for PV local-pvncf4c to bind to PVC pvc-t476h Oct 5 12:15:44.068: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-t476h] to have phase Bound Oct 5 12:15:44.071: INFO: PersistentVolumeClaim pvc-t476h found but phase is Pending instead of Bound. Oct 5 12:15:46.075: INFO: PersistentVolumeClaim pvc-t476h found but phase is Pending instead of Bound. Oct 5 12:15:48.079: INFO: PersistentVolumeClaim pvc-t476h found but phase is Pending instead of Bound. Oct 5 12:15:50.084: INFO: PersistentVolumeClaim pvc-t476h found and phase=Bound (6.016399699s) Oct 5 12:15:50.085: INFO: Waiting up to 3m0s for PersistentVolume local-pvncf4c to have phase Bound Oct 5 12:15:50.088: INFO: PersistentVolume local-pvncf4c found and phase=Bound (3.21166ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:15:52.113: INFO: pod "pod-3fc23098-4719-494e-970f-331082a039c9" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:15:52.113: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7516 PodName:pod-3fc23098-4719-494e-970f-331082a039c9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:52.113: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:52.240: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Oct 5 12:15:52.240: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7516 PodName:pod-3fc23098-4719-494e-970f-331082a039c9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:52.240: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:52.328: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-3fc23098-4719-494e-970f-331082a039c9 in namespace persistent-local-volumes-test-7516 STEP: Creating pod2 STEP: Creating a pod Oct 5 12:15:56.354: INFO: pod "pod-b8b1bd76-ce8c-4538-93fe-a4915496f4fc" created on Node "v122-worker2" STEP: Reading in pod2 Oct 5 12:15:56.355: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7516 PodName:pod-b8b1bd76-ce8c-4538-93fe-a4915496f4fc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:15:56.355: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:15:56.480: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-b8b1bd76-ce8c-4538-93fe-a4915496f4fc in namespace persistent-local-volumes-test-7516 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:15:56.485: INFO: Deleting PersistentVolumeClaim "pvc-t476h" Oct 5 12:15:56.489: INFO: Deleting PersistentVolume "local-pvncf4c" Oct 5 12:15:56.493: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep 
/tmp/local-volume-test-db075aa8-cb55-4809-9ee4-6d13595849db/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7516 PodName:hostexec-v122-worker2-twx9z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:56.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop8" on node "v122-worker2" at path /tmp/local-volume-test-db075aa8-cb55-4809-9ee4-6d13595849db/file Oct 5 12:15:56.659: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-7516 PodName:hostexec-v122-worker2-twx9z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:56.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-db075aa8-cb55-4809-9ee4-6d13595849db Oct 5 12:15:56.788: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-db075aa8-cb55-4809-9ee4-6d13595849db] Namespace:persistent-local-volumes-test-7516 PodName:hostexec-v122-worker2-twx9z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:15:56.788: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:15:56.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7516" for this suite. • [SLOW TEST:15.218 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":696,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:42.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144 [It] should create and delete 
persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:707 STEP: creating a Gluster DP server Pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass STEP: creating a claim object with a suffix for gluster dynamic provisioner Oct 5 12:15:54.458: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-8023 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-8023-glusterdptestzj2w5,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Oct 5 12:15:54.468: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-659g8] to have phase Bound Oct 5 12:15:54.471: INFO: PersistentVolumeClaim pvc-659g8 found but phase is Pending instead of Bound. Oct 5 12:15:56.475: INFO: PersistentVolumeClaim pvc-659g8 found and phase=Bound (2.006921369s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-8023"/"pvc-659g8" STEP: deleting the claim's PV "pvc-a8921ad4-7e57-42aa-bdf6-542252cb4985" Oct 5 12:15:56.486: INFO: Waiting up to 20m0s for PersistentVolume pvc-a8921ad4-7e57-42aa-bdf6-542252cb4985 to get deleted Oct 5 12:15:56.489: INFO: PersistentVolume pvc-a8921ad4-7e57-42aa-bdf6-542252cb4985 found and phase=Bound (2.804081ms) Oct 5 12:16:01.494: INFO: PersistentVolume pvc-a8921ad4-7e57-42aa-bdf6-542252cb4985 was removed Oct 5 12:16:01.494: INFO: deleting claim "volume-provisioning-8023"/"pvc-659g8" Oct 5 12:16:01.498: INFO: deleting storage class volume-provisioning-8023-glusterdptestzj2w5 [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:01.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-8023" for this suite. 
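
The GlusterDynamicProvisioner spec above reduces to creating a StorageClass that names the provisioner and a claim that references the class, then waiting for the claim to bind to the dynamically created PV. A minimal sketch of that pair follows; the class name, endpoint parameter, and size are illustrative placeholders, not what the test generated.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	scName := "glusterdptest-example" // illustrative; the test generates its own name

	// Dynamic provisioning: the StorageClass names the provisioner, and any PVC
	// that references the class gets a PV created for it on demand.
	sc := &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: scName},
		Provisioner: "kubernetes.io/glusterfs",
		Parameters: map[string]string{
			"resturl": "http://127.0.0.1:8081", // placeholder endpoint
		},
	}

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &scName,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("2Gi"),
				},
			},
		},
	}
	fmt.Printf("SC: %+v\nPVC: %+v\n", sc, pvc.Spec)
}
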
• [SLOW TEST:19.119 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 GlusterDynamicProvisioner /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:706 should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:707 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":13,"skipped":813,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:13:27.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591 STEP: Building a driver namespace object, basename csi-mock-volumes-9428 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:13:27.887: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-attacher Oct 5 12:13:27.891: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9428 Oct 5 12:13:27.891: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9428 Oct 5 12:13:27.895: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9428 Oct 5 12:13:27.900: INFO: creating *v1.Role: csi-mock-volumes-9428-9117/external-attacher-cfg-csi-mock-volumes-9428 Oct 5 12:13:27.904: INFO: creating *v1.RoleBinding: csi-mock-volumes-9428-9117/csi-attacher-role-cfg Oct 5 12:13:27.908: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-provisioner Oct 5 12:13:27.912: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9428 Oct 5 12:13:27.912: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9428 Oct 5 12:13:27.916: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9428 Oct 5 12:13:27.919: INFO: creating *v1.Role: csi-mock-volumes-9428-9117/external-provisioner-cfg-csi-mock-volumes-9428 Oct 5 12:13:27.925: INFO: creating *v1.RoleBinding: csi-mock-volumes-9428-9117/csi-provisioner-role-cfg Oct 5 12:13:27.928: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-resizer Oct 5 12:13:27.932: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9428 Oct 5 12:13:27.932: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9428 Oct 5 12:13:27.936: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9428 Oct 5 12:13:27.939: INFO: creating *v1.Role: csi-mock-volumes-9428-9117/external-resizer-cfg-csi-mock-volumes-9428 Oct 5 12:13:27.943: INFO: creating *v1.RoleBinding: csi-mock-volumes-9428-9117/csi-resizer-role-cfg Oct 5 12:13:27.947: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-snapshotter Oct 5 12:13:27.951: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9428 Oct 5 
12:13:27.951: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9428 Oct 5 12:13:27.955: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9428 Oct 5 12:13:27.958: INFO: creating *v1.Role: csi-mock-volumes-9428-9117/external-snapshotter-leaderelection-csi-mock-volumes-9428 Oct 5 12:13:27.962: INFO: creating *v1.RoleBinding: csi-mock-volumes-9428-9117/external-snapshotter-leaderelection Oct 5 12:13:27.966: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-mock Oct 5 12:13:27.969: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9428 Oct 5 12:13:27.973: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9428 Oct 5 12:13:27.976: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9428 Oct 5 12:13:27.980: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9428 Oct 5 12:13:27.983: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9428 Oct 5 12:13:27.987: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9428 Oct 5 12:13:27.991: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9428 Oct 5 12:13:27.995: INFO: creating *v1.StatefulSet: csi-mock-volumes-9428-9117/csi-mockplugin Oct 5 12:13:28.003: INFO: creating *v1.StatefulSet: csi-mock-volumes-9428-9117/csi-mockplugin-attacher Oct 5 12:13:28.008: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9428 to register on node v122-worker STEP: Creating pod Oct 5 12:13:33.024: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:13:33.031: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-82fsj] to have phase Bound Oct 5 12:13:33.034: INFO: PersistentVolumeClaim pvc-82fsj found but phase is Pending instead of Bound. 
Oct 5 12:13:35.040: INFO: PersistentVolumeClaim pvc-82fsj found and phase=Bound (2.009516824s) STEP: Expanding current pvc STEP: Deleting pod pvc-volume-tester-ppq25 Oct 5 12:15:43.080: INFO: Deleting pod "pvc-volume-tester-ppq25" in namespace "csi-mock-volumes-9428" Oct 5 12:15:43.085: INFO: Wait up to 5m0s for pod "pvc-volume-tester-ppq25" to be fully deleted STEP: Deleting claim pvc-82fsj Oct 5 12:15:45.100: INFO: Waiting up to 2m0s for PersistentVolume pvc-f2681a25-aaf6-4cb7-aea3-3d357ba2d3cb to get deleted Oct 5 12:15:45.104: INFO: PersistentVolume pvc-f2681a25-aaf6-4cb7-aea3-3d357ba2d3cb found and phase=Bound (3.179218ms) Oct 5 12:15:47.108: INFO: PersistentVolume pvc-f2681a25-aaf6-4cb7-aea3-3d357ba2d3cb found and phase=Released (2.007385811s) Oct 5 12:15:49.112: INFO: PersistentVolume pvc-f2681a25-aaf6-4cb7-aea3-3d357ba2d3cb found and phase=Released (4.011780207s) Oct 5 12:15:51.116: INFO: PersistentVolume pvc-f2681a25-aaf6-4cb7-aea3-3d357ba2d3cb found and phase=Released (6.015963817s) Oct 5 12:15:53.120: INFO: PersistentVolume pvc-f2681a25-aaf6-4cb7-aea3-3d357ba2d3cb was removed STEP: Deleting storageclass csi-mock-volumes-9428-sc999jg STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9428 STEP: Waiting for namespaces [csi-mock-volumes-9428] to vanish STEP: uninstalling csi mock driver Oct 5 12:15:59.136: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-attacher Oct 5 12:15:59.142: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9428 Oct 5 12:15:59.147: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9428 Oct 5 12:15:59.152: INFO: deleting *v1.Role: csi-mock-volumes-9428-9117/external-attacher-cfg-csi-mock-volumes-9428 Oct 5 12:15:59.156: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9428-9117/csi-attacher-role-cfg Oct 5 12:15:59.161: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-provisioner Oct 5 12:15:59.165: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9428 Oct 5 12:15:59.170: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9428 Oct 5 12:15:59.175: INFO: deleting *v1.Role: csi-mock-volumes-9428-9117/external-provisioner-cfg-csi-mock-volumes-9428 Oct 5 12:15:59.179: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9428-9117/csi-provisioner-role-cfg Oct 5 12:15:59.184: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-resizer Oct 5 12:15:59.189: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9428 Oct 5 12:15:59.193: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9428 Oct 5 12:15:59.197: INFO: deleting *v1.Role: csi-mock-volumes-9428-9117/external-resizer-cfg-csi-mock-volumes-9428 Oct 5 12:15:59.205: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9428-9117/csi-resizer-role-cfg Oct 5 12:15:59.216: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-snapshotter Oct 5 12:15:59.222: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9428 Oct 5 12:15:59.227: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9428 Oct 5 12:15:59.231: INFO: deleting *v1.Role: csi-mock-volumes-9428-9117/external-snapshotter-leaderelection-csi-mock-volumes-9428 Oct 5 12:15:59.236: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9428-9117/external-snapshotter-leaderelection Oct 5 12:15:59.241: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9428-9117/csi-mock Oct 5 12:15:59.245: INFO: 
deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9428 Oct 5 12:15:59.250: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9428 Oct 5 12:15:59.255: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9428 Oct 5 12:15:59.259: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9428 Oct 5 12:15:59.264: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9428 Oct 5 12:15:59.269: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9428 Oct 5 12:15:59.273: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9428 Oct 5 12:15:59.278: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9428-9117/csi-mockplugin Oct 5 12:15:59.284: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9428-9117/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-9428-9117 STEP: Waiting for namespaces [csi-mock-volumes-9428-9117] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:05.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:157.512 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562 should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591 ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:16:01.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Oct 5 12:16:01.588: INFO: The status of Pod test-hostpath-type-g57zb is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:16:03.593: INFO: The status of Pod test-hostpath-type-g57zb is Running (Ready = true) STEP: running on node v122-worker STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:07.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-4820" for this suite. 
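Annotator note: the CSI volume-expansion cases in this run drive resizing by growing the claim's requested size after it is bound; whether the resize actually happens depends on the StorageClass advertising allowVolumeExpansion and on the driver supporting expansion (the resizingOnDriver=off case deliberately leaves it off, so the claim must not grow). A minimal sketch of that trigger, with illustrative names rather than the ones generated by the suite:

# Hypothetical expandable class; the mock driver name and class above are generated per test.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-expandable-sc        # illustrative name
provisioner: example.csi.driver.io   # placeholder driver, not the mock driver used by the suite
allowVolumeExpansion: true           # with this false or absent, the PVC stays at its old size
EOF

# Request the expansion by patching the bound claim (claim name and size are illustrative;
# this run's claims are auto-generated, e.g. pvc-82fsj).
kubectl patch pvc example-claim \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'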
• [SLOW TEST:6.112 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket","total":-1,"completed":14,"skipped":828,"failed":0} SSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":19,"skipped":621,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:16:05.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Oct 5 12:16:07.367: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-4b29a85c-b1bf-4ac7-af6f-32699774dac8-backend && ln -s /tmp/local-volume-test-4b29a85c-b1bf-4ac7-af6f-32699774dac8-backend /tmp/local-volume-test-4b29a85c-b1bf-4ac7-af6f-32699774dac8] Namespace:persistent-local-volumes-test-9633 PodName:hostexec-v122-worker2-ln9jq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:07.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:16:07.504: INFO: Creating a PV followed by a PVC Oct 5 12:16:07.512: INFO: Waiting for PV local-pvkcm8t to bind to PVC pvc-wl7l6 Oct 5 12:16:07.513: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-wl7l6] to have phase Bound Oct 5 12:16:07.516: INFO: PersistentVolumeClaim pvc-wl7l6 found but phase is Pending instead of Bound. Oct 5 12:16:09.520: INFO: PersistentVolumeClaim pvc-wl7l6 found but phase is Pending instead of Bound. Oct 5 12:16:11.527: INFO: PersistentVolumeClaim pvc-wl7l6 found but phase is Pending instead of Bound. Oct 5 12:16:13.531: INFO: PersistentVolumeClaim pvc-wl7l6 found but phase is Pending instead of Bound. Oct 5 12:16:15.535: INFO: PersistentVolumeClaim pvc-wl7l6 found but phase is Pending instead of Bound. Oct 5 12:16:17.540: INFO: PersistentVolumeClaim pvc-wl7l6 found but phase is Pending instead of Bound. 
Oct 5 12:16:19.544: INFO: PersistentVolumeClaim pvc-wl7l6 found and phase=Bound (12.031308638s) Oct 5 12:16:19.544: INFO: Waiting up to 3m0s for PersistentVolume local-pvkcm8t to have phase Bound Oct 5 12:16:19.547: INFO: PersistentVolume local-pvkcm8t found and phase=Bound (2.938703ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Oct 5 12:16:21.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9633 exec pod-b6251e6f-67a4-4505-98dc-6bee8f8a3051 --namespace=persistent-local-volumes-test-9633 -- stat -c %g /mnt/volume1' Oct 5 12:16:21.864: INFO: stderr: "" Oct 5 12:16:21.864: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Oct 5 12:16:29.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.90:45799 --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9633 exec pod-831e5063-fe4d-4c1d-991d-44b92ad7a668 --namespace=persistent-local-volumes-test-9633 -- stat -c %g /mnt/volume1' Oct 5 12:16:30.120: INFO: stderr: "" Oct 5 12:16:30.120: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-b6251e6f-67a4-4505-98dc-6bee8f8a3051 in namespace persistent-local-volumes-test-9633 STEP: Deleting second pod STEP: Deleting pod pod-831e5063-fe4d-4c1d-991d-44b92ad7a668 in namespace persistent-local-volumes-test-9633 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:16:30.130: INFO: Deleting PersistentVolumeClaim "pvc-wl7l6" Oct 5 12:16:30.135: INFO: Deleting PersistentVolume "local-pvkcm8t" STEP: Removing the test directory Oct 5 12:16:30.140: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4b29a85c-b1bf-4ac7-af6f-32699774dac8 && rm -r /tmp/local-volume-test-4b29a85c-b1bf-4ac7-af6f-32699774dac8-backend] Namespace:persistent-local-volumes-test-9633 PodName:hostexec-v122-worker2-ln9jq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:30.140: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:30.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9633" for this suite. 
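Annotator note: both pods in the fsGroup case above read back the group owner of the mounted volume with stat and expect the value set in the pod security context (1234 in this run). A rough sketch of the same check, with hypothetical pod and claim names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo                 # illustrative; the suite generates pod-<uuid> names
spec:
  securityContext:
    fsGroup: 1234                    # group applied to the volume at mount time
  containers:
  - name: write-pod
    image: busybox                   # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: volume1
      mountPath: /mnt/volume1
  volumes:
  - name: volume1
    persistentVolumeClaim:
      claimName: example-local-claim # assumed pre-bound local PVC
EOF

# Same verification the test performs; expected output: 1234
kubectl exec fsgroup-demo -- stat -c %g /mnt/volume1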
• [SLOW TEST:25.002 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":20,"skipped":621,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:16:30.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Oct 5 12:16:30.441: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:30.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3687" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Oct 5 12:16:30.450: INFO: AfterEach: Cleaning up test resources Oct 5 12:16:30.450: INFO: pvc is nil Oct 5 12:16:30.450: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.045 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:156 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:16:30.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144 [It] should create and delete default persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:605 Oct 5 12:16:30.569: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:30.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-6852" for this suite. 
S [SKIPPING] [0.041 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner Default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:604 should create and delete default persistent volumes [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:605 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:606 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:16:30.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144 [It] should provision storage with non-default reclaim policy Retain /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:376 Oct 5 12:16:30.670: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:30.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-397" for this suite. 
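Annotator note: the reclaim-policy case above is skipped on this local provider, but the behaviour it targets is generic: a StorageClass whose reclaimPolicy is Retain leaves the dynamically provisioned PV (and its data) in place after the claim is deleted instead of removing it. A sketch with placeholder names:

cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-sc                    # illustrative name
provisioner: example.csi.driver.io   # placeholder; the skipped test targets the GCE PD provisioner
reclaimPolicy: Retain                # default is Delete
EOF
# After a PVC bound through this class is deleted, its PV moves to the Released
# phase and has to be cleaned up or re-bound manually.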
S [SKIPPING] [0.037 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:150 should provision storage with non-default reclaim policy Retain [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:376 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:377 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:30.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469 STEP: Creating configMap with name cm-test-opt-create-f8a7d5e9-2c35-4940-9b33-0824b1f51150 STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:30.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5525" for this suite. 
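Annotator note: the projected-configMap case that just finished runs for its full five-minute window because the pod references a key the ConfigMap does not contain and the projection is not optional, so the kubelet never finishes setting up the volume and the pod never becomes ready. A sketch of that shape, with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-missing-key-demo          # illustrative; the suite uses cm-test-opt-create-<uuid>
data:
  present-key: "some value"
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: c
    image: busybox                   # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-missing-key-demo
          optional: false            # non-optional: a missing key blocks volume setup
          items:
          - key: absent-key          # key does not exist in the ConfigMap
            path: data.txt
EOF
# kubectl describe pod projected-cm-demo    # shows FailedMount-style events while it waits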
• [SLOW TEST:300.066 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":13,"skipped":402,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:16:07.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "v122-worker2" using path "/tmp/local-volume-test-639c930d-11b5-4611-a684-211964bb734b" Oct 5 12:16:09.745: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-639c930d-11b5-4611-a684-211964bb734b && dd if=/dev/zero of=/tmp/local-volume-test-639c930d-11b5-4611-a684-211964bb734b/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-639c930d-11b5-4611-a684-211964bb734b/file] Namespace:persistent-local-volumes-test-6914 PodName:hostexec-v122-worker2-xkc4d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:09.745: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:16:09.962: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-639c930d-11b5-4611-a684-211964bb734b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6914 PodName:hostexec-v122-worker2-xkc4d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:09.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:16:10.103: INFO: Creating a PV followed by a PVC Oct 5 12:16:10.113: INFO: Waiting for PV local-pvxr6sh to bind to PVC pvc-lj7jw Oct 5 12:16:10.113: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-lj7jw] to have phase Bound Oct 5 12:16:10.116: INFO: PersistentVolumeClaim pvc-lj7jw found but phase is Pending instead of Bound. Oct 5 12:16:12.120: INFO: PersistentVolumeClaim pvc-lj7jw found but phase is Pending instead of Bound. Oct 5 12:16:14.124: INFO: PersistentVolumeClaim pvc-lj7jw found but phase is Pending instead of Bound. Oct 5 12:16:16.128: INFO: PersistentVolumeClaim pvc-lj7jw found but phase is Pending instead of Bound. Oct 5 12:16:18.133: INFO: PersistentVolumeClaim pvc-lj7jw found but phase is Pending instead of Bound. 
Oct 5 12:16:20.138: INFO: PersistentVolumeClaim pvc-lj7jw found and phase=Bound (10.024784086s) Oct 5 12:16:20.138: INFO: Waiting up to 3m0s for PersistentVolume local-pvxr6sh to have phase Bound Oct 5 12:16:20.141: INFO: PersistentVolume local-pvxr6sh found and phase=Bound (2.974549ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:16:24.167: INFO: pod "pod-e06ed267-6f3e-4a89-91c9-e0a293ed50e2" created on Node "v122-worker2" STEP: Writing in pod1 Oct 5 12:16:24.167: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6914 PodName:pod-e06ed267-6f3e-4a89-91c9-e0a293ed50e2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:16:24.167: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:16:24.302: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000242 seconds, 72.6KB/s", err: Oct 5 12:16:24.302: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-6914 PodName:pod-e06ed267-6f3e-4a89-91c9-e0a293ed50e2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:16:24.302: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:16:24.412: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-e06ed267-6f3e-4a89-91c9-e0a293ed50e2 in namespace persistent-local-volumes-test-6914 STEP: Creating pod2 STEP: Creating a pod Oct 5 12:16:30.435: INFO: pod "pod-b4d5e1c8-c050-475e-8a8b-77c435ecfabc" created on Node "v122-worker2" STEP: Reading in pod2 Oct 5 12:16:30.435: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-6914 PodName:pod-b4d5e1c8-c050-475e-8a8b-77c435ecfabc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:16:30.435: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:16:30.559: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-b4d5e1c8-c050-475e-8a8b-77c435ecfabc in namespace persistent-local-volumes-test-6914 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:16:30.565: INFO: Deleting PersistentVolumeClaim "pvc-lj7jw" Oct 5 12:16:30.569: INFO: Deleting PersistentVolume 
"local-pvxr6sh" Oct 5 12:16:30.574: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-639c930d-11b5-4611-a684-211964bb734b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6914 PodName:hostexec-v122-worker2-xkc4d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:30.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop8" on node "v122-worker2" at path /tmp/local-volume-test-639c930d-11b5-4611-a684-211964bb734b/file Oct 5 12:16:30.675: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop8] Namespace:persistent-local-volumes-test-6914 PodName:hostexec-v122-worker2-xkc4d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:30.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-639c930d-11b5-4611-a684-211964bb734b Oct 5 12:16:30.790: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-639c930d-11b5-4611-a684-211964bb734b] Namespace:persistent-local-volumes-test-6914 PodName:hostexec-v122-worker2-xkc4d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:30.790: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:30.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6914" for this suite. 
• [SLOW TEST:23.198 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":841,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:56.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591 STEP: Building a driver namespace object, basename csi-mock-volumes-6842 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:15:57.022: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-attacher Oct 5 12:15:57.026: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6842 Oct 5 12:15:57.026: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6842 Oct 5 12:15:57.029: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6842 Oct 5 12:15:57.033: INFO: creating *v1.Role: csi-mock-volumes-6842-137/external-attacher-cfg-csi-mock-volumes-6842 Oct 5 12:15:57.037: INFO: creating *v1.RoleBinding: csi-mock-volumes-6842-137/csi-attacher-role-cfg Oct 5 12:15:57.040: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-provisioner Oct 5 12:15:57.044: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6842 Oct 5 12:15:57.044: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6842 Oct 5 12:15:57.048: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6842 Oct 5 12:15:57.052: INFO: creating *v1.Role: csi-mock-volumes-6842-137/external-provisioner-cfg-csi-mock-volumes-6842 Oct 5 12:15:57.055: INFO: creating *v1.RoleBinding: csi-mock-volumes-6842-137/csi-provisioner-role-cfg Oct 5 12:15:57.059: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-resizer Oct 5 12:15:57.063: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6842 Oct 5 12:15:57.063: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6842 Oct 5 12:15:57.066: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6842 Oct 5 12:15:57.070: INFO: creating *v1.Role: csi-mock-volumes-6842-137/external-resizer-cfg-csi-mock-volumes-6842 Oct 5 12:15:57.074: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-6842-137/csi-resizer-role-cfg Oct 5 12:15:57.078: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-snapshotter Oct 5 12:15:57.082: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6842 Oct 5 12:15:57.082: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6842 Oct 5 12:15:57.085: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6842 Oct 5 12:15:57.089: INFO: creating *v1.Role: csi-mock-volumes-6842-137/external-snapshotter-leaderelection-csi-mock-volumes-6842 Oct 5 12:15:57.093: INFO: creating *v1.RoleBinding: csi-mock-volumes-6842-137/external-snapshotter-leaderelection Oct 5 12:15:57.097: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-mock Oct 5 12:15:57.101: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6842 Oct 5 12:15:57.104: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6842 Oct 5 12:15:57.108: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6842 Oct 5 12:15:57.111: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6842 Oct 5 12:15:57.115: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6842 Oct 5 12:15:57.119: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6842 Oct 5 12:15:57.122: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6842 Oct 5 12:15:57.126: INFO: creating *v1.StatefulSet: csi-mock-volumes-6842-137/csi-mockplugin Oct 5 12:15:57.133: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6842 Oct 5 12:15:57.136: INFO: creating *v1.StatefulSet: csi-mock-volumes-6842-137/csi-mockplugin-resizer Oct 5 12:15:57.141: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6842" Oct 5 12:15:57.144: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6842 to register on node v122-worker2 STEP: Creating pod Oct 5 12:16:02.163: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:16:02.168: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-w5sl5] to have phase Bound Oct 5 12:16:02.171: INFO: PersistentVolumeClaim pvc-w5sl5 found but phase is Pending instead of Bound. 
Oct 5 12:16:04.174: INFO: PersistentVolumeClaim pvc-w5sl5 found and phase=Bound (2.006294554s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Oct 5 12:16:08.212: INFO: Deleting pod "pvc-volume-tester-mdmk2" in namespace "csi-mock-volumes-6842" Oct 5 12:16:08.217: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mdmk2" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-mdmk2 Oct 5 12:16:10.233: INFO: Deleting pod "pvc-volume-tester-mdmk2" in namespace "csi-mock-volumes-6842" STEP: Deleting pod pvc-volume-tester-78zjw Oct 5 12:16:10.236: INFO: Deleting pod "pvc-volume-tester-78zjw" in namespace "csi-mock-volumes-6842" Oct 5 12:16:10.241: INFO: Wait up to 5m0s for pod "pvc-volume-tester-78zjw" to be fully deleted STEP: Deleting claim pvc-w5sl5 Oct 5 12:16:12.257: INFO: Waiting up to 2m0s for PersistentVolume pvc-085f48eb-6400-4921-a88e-2c1d10dc2124 to get deleted Oct 5 12:16:12.260: INFO: PersistentVolume pvc-085f48eb-6400-4921-a88e-2c1d10dc2124 found and phase=Bound (3.022279ms) Oct 5 12:16:14.264: INFO: PersistentVolume pvc-085f48eb-6400-4921-a88e-2c1d10dc2124 was removed STEP: Deleting storageclass csi-mock-volumes-6842-scflj4g STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6842 STEP: Waiting for namespaces [csi-mock-volumes-6842] to vanish STEP: uninstalling csi mock driver Oct 5 12:16:20.278: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-attacher Oct 5 12:16:20.283: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6842 Oct 5 12:16:20.287: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6842 Oct 5 12:16:20.292: INFO: deleting *v1.Role: csi-mock-volumes-6842-137/external-attacher-cfg-csi-mock-volumes-6842 Oct 5 12:16:20.296: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6842-137/csi-attacher-role-cfg Oct 5 12:16:20.302: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-provisioner Oct 5 12:16:20.306: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6842 Oct 5 12:16:20.311: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6842 Oct 5 12:16:20.315: INFO: deleting *v1.Role: csi-mock-volumes-6842-137/external-provisioner-cfg-csi-mock-volumes-6842 Oct 5 12:16:20.320: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6842-137/csi-provisioner-role-cfg Oct 5 12:16:20.325: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-resizer Oct 5 12:16:20.329: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6842 Oct 5 12:16:20.334: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6842 Oct 5 12:16:20.339: INFO: deleting *v1.Role: csi-mock-volumes-6842-137/external-resizer-cfg-csi-mock-volumes-6842 Oct 5 12:16:20.343: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6842-137/csi-resizer-role-cfg Oct 5 12:16:20.348: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-snapshotter Oct 5 12:16:20.353: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6842 Oct 5 12:16:20.357: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6842 Oct 5 12:16:20.362: INFO: deleting *v1.Role: csi-mock-volumes-6842-137/external-snapshotter-leaderelection-csi-mock-volumes-6842 Oct 5 12:16:20.366: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-6842-137/external-snapshotter-leaderelection Oct 5 12:16:20.371: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6842-137/csi-mock Oct 5 12:16:20.375: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6842 Oct 5 12:16:20.380: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6842 Oct 5 12:16:20.384: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6842 Oct 5 12:16:20.388: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6842 Oct 5 12:16:20.393: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6842 Oct 5 12:16:20.397: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6842 Oct 5 12:16:20.401: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6842 Oct 5 12:16:20.406: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6842-137/csi-mockplugin Oct 5 12:16:20.411: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6842 Oct 5 12:16:20.416: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6842-137/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-6842-137 STEP: Waiting for namespaces [csi-mock-volumes-6842-137] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:32.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:35.495 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562 should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":13,"skipped":713,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSS ------------------------------ Oct 5 12:16:32.478: INFO: Running AfterSuite actions on all nodes Oct 5 12:16:32.478: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Oct 5 12:16:32.478: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Oct 5 12:16:32.478: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Oct 5 12:16:32.478: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Oct 5 12:16:32.478: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Oct 5 12:16:32.478: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Oct 5 12:16:32.478: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:16:30.778: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Oct 5 12:16:30.817: INFO: The status of Pod test-hostpath-type-ngk22 is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:16:32.821: INFO: The status of Pod test-hostpath-type-ngk22 is Running (Ready = true) STEP: running on node v122-worker [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:34.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-5571" for this suite. • ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:16:30.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Oct 5 12:16:30.974: INFO: The status of Pod test-hostpath-type-wqqgl is Pending, waiting for it to be Running (with Ready = true) Oct 5 12:16:32.978: INFO: The status of Pod test-hostpath-type-wqqgl is Running (Ready = true) STEP: running on node v122-worker [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:35.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-5247" for this suite. 
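Annotator note: the HostPathType cases all follow the same pattern: a setup pod creates the path on the node, then a second pod mounts it with an explicit hostPath type, and the test asserts either success or a mount-failure event when the type does not match what is on disk (a socket mounted as CharDevice fails, while the same socket with the type left unset mounts fine). A sketch of the failing variant, names and paths illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo           # illustrative name
spec:
  nodeName: v122-worker              # pinned to the node where the path was created, as in the test
  containers:
  - name: c
    image: busybox                   # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: sock
      mountPath: /mnt/sock
  volumes:
  - name: sock
    hostPath:
      path: /some/dir/asocket        # an existing unix socket on the node (path is illustrative)
      type: CharDevice               # wrong type for a socket, so the kubelet refuses to mount it
      # type: ""                     # leaving the type unset skips the check and the mount succeeds
EOF
# kubectl get events --field-selector involvedObject.name=hostpath-type-demo   # shows the mount error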
• ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset","total":-1,"completed":16,"skipped":870,"failed":0} Oct 5 12:16:35.011: INFO: Running AfterSuite actions on all nodes Oct 5 12:16:35.011: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Oct 5 12:16:35.011: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Oct 5 12:16:35.011: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Oct 5 12:16:35.011: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Oct 5 12:16:35.011: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Oct 5 12:16:35.011: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Oct 5 12:16:35.011: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:16:30.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-777c8f39-c7d3-4859-8607-f6cc8f50a9e7" Oct 5 12:16:32.744: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-777c8f39-c7d3-4859-8607-f6cc8f50a9e7" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-777c8f39-c7d3-4859-8607-f6cc8f50a9e7" "/tmp/local-volume-test-777c8f39-c7d3-4859-8607-f6cc8f50a9e7"] Namespace:persistent-local-volumes-test-8620 PodName:hostexec-v122-worker-9lr8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:32.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:16:32.907: INFO: Creating a PV followed by a PVC Oct 5 12:16:32.916: INFO: Waiting for PV local-pvh799d to bind to PVC pvc-ljw9h Oct 5 12:16:32.916: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ljw9h] to have phase Bound Oct 5 12:16:32.919: INFO: PersistentVolumeClaim pvc-ljw9h found but phase is Pending instead of Bound. 
Oct 5 12:16:34.923: INFO: PersistentVolumeClaim pvc-ljw9h found and phase=Bound (2.007009578s) Oct 5 12:16:34.923: INFO: Waiting up to 3m0s for PersistentVolume local-pvh799d to have phase Bound Oct 5 12:16:34.926: INFO: PersistentVolume local-pvh799d found and phase=Bound (2.995213ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Oct 5 12:16:36.951: INFO: pod "pod-48690d15-6943-4c91-b15e-b557a0b26bde" created on Node "v122-worker" STEP: Writing in pod1 Oct 5 12:16:36.951: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8620 PodName:pod-48690d15-6943-4c91-b15e-b557a0b26bde ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:16:36.951: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:16:37.017: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Oct 5 12:16:37.017: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8620 PodName:pod-48690d15-6943-4c91-b15e-b557a0b26bde ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:16:37.017: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:16:37.146: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Oct 5 12:16:37.146: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-777c8f39-c7d3-4859-8607-f6cc8f50a9e7 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8620 PodName:pod-48690d15-6943-4c91-b15e-b557a0b26bde ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 5 12:16:37.146: INFO: >>> kubeConfig: /root/.kube/config Oct 5 12:16:37.230: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-777c8f39-c7d3-4859-8607-f6cc8f50a9e7 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-48690d15-6943-4c91-b15e-b557a0b26bde in namespace persistent-local-volumes-test-8620 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Oct 5 12:16:37.236: INFO: Deleting PersistentVolumeClaim "pvc-ljw9h" Oct 5 12:16:37.241: INFO: Deleting PersistentVolume "local-pvh799d" STEP: Unmount tmpfs mount point on node "v122-worker" at path "/tmp/local-volume-test-777c8f39-c7d3-4859-8607-f6cc8f50a9e7" Oct 5 12:16:37.246: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-777c8f39-c7d3-4859-8607-f6cc8f50a9e7"] Namespace:persistent-local-volumes-test-8620 PodName:hostexec-v122-worker-9lr8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Oct 5 12:16:37.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Oct 5 12:16:37.393: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-777c8f39-c7d3-4859-8607-f6cc8f50a9e7] Namespace:persistent-local-volumes-test-8620 PodName:hostexec-v122-worker-9lr8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:37.393: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:37.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8620" for this suite. • [SLOW TEST:6.859 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":21,"skipped":774,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} Oct 5 12:16:37.546: INFO: Running AfterSuite actions on all nodes Oct 5 12:16:37.546: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Oct 5 12:16:37.546: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Oct 5 12:16:37.546: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Oct 5 12:16:37.546: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Oct 5 12:16:37.546: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Oct 5 12:16:37.546: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Oct 5 12:16:37.546: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:11:42.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [It] should fail due to wrong node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324 STEP: Initializing test volumes Oct 5 
12:11:44.443: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1f7b0eff-9091-431f-b0a1-4d9271db1248] Namespace:persistent-local-volumes-test-2156 PodName:hostexec-v122-worker-mck7x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:11:44.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Oct 5 12:11:44.606: INFO: Creating a PV followed by a PVC Oct 5 12:11:44.615: INFO: Waiting for PV local-pvwgkls to bind to PVC pvc-ncsmr Oct 5 12:11:44.615: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ncsmr] to have phase Bound Oct 5 12:11:44.618: INFO: PersistentVolumeClaim pvc-ncsmr found but phase is Pending instead of Bound. Oct 5 12:11:46.623: INFO: PersistentVolumeClaim pvc-ncsmr found but phase is Pending instead of Bound. Oct 5 12:11:48.628: INFO: PersistentVolumeClaim pvc-ncsmr found but phase is Pending instead of Bound. Oct 5 12:11:50.633: INFO: PersistentVolumeClaim pvc-ncsmr found and phase=Bound (6.018196897s) Oct 5 12:11:50.633: INFO: Waiting up to 3m0s for PersistentVolume local-pvwgkls to have phase Bound Oct 5 12:11:50.636: INFO: PersistentVolume local-pvwgkls found and phase=Bound (3.249554ms) STEP: Cleaning up PVC and PV Oct 5 12:16:50.662: INFO: Deleting PersistentVolumeClaim "pvc-ncsmr" Oct 5 12:16:50.667: INFO: Deleting PersistentVolume "local-pvwgkls" STEP: Removing the test directory Oct 5 12:16:50.672: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1f7b0eff-9091-431f-b0a1-4d9271db1248] Namespace:persistent-local-volumes-test-2156 PodName:hostexec-v122-worker-mck7x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 5 12:16:50.672: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:50.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2156" for this suite. 
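Annotator note: the "should fail due to wrong node" case relies on the node affinity that every local PV carries: the PV created here is pinned to one worker, so a pod forced onto a different node can never use it and the mount fails. The shape of such a PV, with illustrative names and the path and node from this run; the capacity and class name are assumptions, since the log does not show them:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo                # illustrative; this run used local-pvwgkls
spec:
  capacity:
    storage: 1Gi                     # assumed size
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage    # assumed class name
  local:
    path: /tmp/local-volume-test-1f7b0eff-9091-431f-b0a1-4d9271db1248   # directory created above
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["v122-worker"]    # usable only here; a pod pinned to another node cannot mount it
EOF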
• [SLOW TEST:308.446 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Local volume that cannot be mounted [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304 should fail due to wrong node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to wrong node","total":-1,"completed":9,"skipped":220,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} Oct 5 12:16:50.830: INFO: Running AfterSuite actions on all nodes Oct 5 12:16:50.830: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Oct 5 12:16:50.830: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Oct 5 12:16:50.830: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Oct 5 12:16:50.830: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Oct 5 12:16:50.830: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Oct 5 12:16:50.830: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Oct 5 12:16:50.830: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:20.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 STEP: Building a driver namespace object, basename csi-mock-volumes-7971 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:14:20.409: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-attacher Oct 5 12:14:20.413: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7971 Oct 5 12:14:20.413: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7971 Oct 5 12:14:20.417: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7971 Oct 5 12:14:20.421: INFO: creating *v1.Role: csi-mock-volumes-7971-5441/external-attacher-cfg-csi-mock-volumes-7971 Oct 5 12:14:20.426: INFO: creating *v1.RoleBinding: csi-mock-volumes-7971-5441/csi-attacher-role-cfg Oct 5 12:14:20.430: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-provisioner Oct 5 12:14:20.433: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7971 Oct 5 12:14:20.433: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7971 Oct 5 12:14:20.437: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7971 Oct 5 12:14:20.441: INFO: creating *v1.Role: csi-mock-volumes-7971-5441/external-provisioner-cfg-csi-mock-volumes-7971 Oct 5 
12:14:20.449: INFO: creating *v1.RoleBinding: csi-mock-volumes-7971-5441/csi-provisioner-role-cfg Oct 5 12:14:20.453: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-resizer Oct 5 12:14:20.457: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7971 Oct 5 12:14:20.457: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7971 Oct 5 12:14:20.460: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7971 Oct 5 12:14:20.464: INFO: creating *v1.Role: csi-mock-volumes-7971-5441/external-resizer-cfg-csi-mock-volumes-7971 Oct 5 12:14:20.468: INFO: creating *v1.RoleBinding: csi-mock-volumes-7971-5441/csi-resizer-role-cfg Oct 5 12:14:20.472: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-snapshotter Oct 5 12:14:20.476: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7971 Oct 5 12:14:20.476: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7971 Oct 5 12:14:20.481: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7971 Oct 5 12:14:20.484: INFO: creating *v1.Role: csi-mock-volumes-7971-5441/external-snapshotter-leaderelection-csi-mock-volumes-7971 Oct 5 12:14:20.488: INFO: creating *v1.RoleBinding: csi-mock-volumes-7971-5441/external-snapshotter-leaderelection Oct 5 12:14:20.491: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-mock Oct 5 12:14:20.495: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7971 Oct 5 12:14:20.499: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7971 Oct 5 12:14:20.502: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7971 Oct 5 12:14:20.506: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7971 Oct 5 12:14:20.510: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7971 Oct 5 12:14:20.513: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7971 Oct 5 12:14:20.517: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7971 Oct 5 12:14:20.531: INFO: creating *v1.StatefulSet: csi-mock-volumes-7971-5441/csi-mockplugin Oct 5 12:14:20.546: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7971 Oct 5 12:14:20.550: INFO: creating *v1.StatefulSet: csi-mock-volumes-7971-5441/csi-mockplugin-attacher Oct 5 12:14:20.554: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7971" Oct 5 12:14:20.557: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7971 to register on node v122-worker STEP: Creating pod STEP: checking for CSIInlineVolumes feature Oct 5 12:14:38.096: INFO: Pod inline-volume-w6qhn has the following logs: Oct 5 12:14:38.100: INFO: Deleting pod "inline-volume-w6qhn" in namespace "csi-mock-volumes-7971" Oct 5 12:14:38.106: INFO: Wait up to 5m0s for pod "inline-volume-w6qhn" to be fully deleted STEP: Deleting the previously created pod Oct 5 12:16:44.113: INFO: Deleting pod "pvc-volume-tester-5jznj" in namespace "csi-mock-volumes-7971" Oct 5 12:16:44.119: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5jznj" to be fully deleted STEP: Checking CSI driver logs Oct 5 12:16:46.136: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true Oct 5 12:16:46.136: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-5jznj Oct 5 12:16:46.136: INFO: Found volume attribute 
csi.storage.k8s.io/pod.namespace: csi-mock-volumes-7971 Oct 5 12:16:46.136: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 20aa718b-ecf2-4c29-8b90-ab2f5ab98d00 Oct 5 12:16:46.136: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Oct 5 12:16:46.136: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-683d5f9e315388802c75e3039917d0bb7045b97b44ad34d42bdbbbca3b765c5b","target_path":"/var/lib/kubelet/pods/20aa718b-ecf2-4c29-8b90-ab2f5ab98d00/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-5jznj Oct 5 12:16:46.136: INFO: Deleting pod "pvc-volume-tester-5jznj" in namespace "csi-mock-volumes-7971" STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7971 STEP: Waiting for namespaces [csi-mock-volumes-7971] to vanish STEP: uninstalling csi mock driver Oct 5 12:16:52.151: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-attacher Oct 5 12:16:52.157: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7971 Oct 5 12:16:52.161: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7971 Oct 5 12:16:52.166: INFO: deleting *v1.Role: csi-mock-volumes-7971-5441/external-attacher-cfg-csi-mock-volumes-7971 Oct 5 12:16:52.171: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7971-5441/csi-attacher-role-cfg Oct 5 12:16:52.175: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-provisioner Oct 5 12:16:52.180: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7971 Oct 5 12:16:52.185: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7971 Oct 5 12:16:52.189: INFO: deleting *v1.Role: csi-mock-volumes-7971-5441/external-provisioner-cfg-csi-mock-volumes-7971 Oct 5 12:16:52.194: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7971-5441/csi-provisioner-role-cfg Oct 5 12:16:52.198: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-resizer Oct 5 12:16:52.203: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7971 Oct 5 12:16:52.208: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7971 Oct 5 12:16:52.212: INFO: deleting *v1.Role: csi-mock-volumes-7971-5441/external-resizer-cfg-csi-mock-volumes-7971 Oct 5 12:16:52.217: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7971-5441/csi-resizer-role-cfg Oct 5 12:16:52.222: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-snapshotter Oct 5 12:16:52.226: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7971 Oct 5 12:16:52.231: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7971 Oct 5 12:16:52.236: INFO: deleting *v1.Role: csi-mock-volumes-7971-5441/external-snapshotter-leaderelection-csi-mock-volumes-7971 Oct 5 12:16:52.240: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7971-5441/external-snapshotter-leaderelection Oct 5 12:16:52.245: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7971-5441/csi-mock Oct 5 12:16:52.249: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7971 Oct 5 12:16:52.254: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7971 Oct 5 12:16:52.258: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7971 Oct 5 12:16:52.263: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7971 Oct 5 12:16:52.268: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7971 Oct 5 12:16:52.272: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7971 Oct 5 12:16:52.277: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7971 Oct 5 12:16:52.281: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7971-5441/csi-mockplugin Oct 5 12:16:52.286: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7971 Oct 5 12:16:52.291: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7971-5441/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7971-5441 STEP: Waiting for namespaces [csi-mock-volumes-7971-5441] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:16:58.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:157.999 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444 contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":30,"skipped":1193,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]"]} Oct 5 12:16:58.318: INFO: Running AfterSuite actions on all nodes Oct 5 12:16:58.318: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Oct 5 12:16:58.318: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Oct 5 12:16:58.318: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Oct 5 12:16:58.318: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Oct 5 12:16:58.318: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Oct 5 12:16:58.318: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Oct 5 12:16:58.318: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:12:33.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430 STEP: Creating the pod [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 
Oct 5 12:17:33.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2310" for this suite. • [SLOW TEST:300.075 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430 ------------------------------ {"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":8,"skipped":245,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} Oct 5 12:17:33.811: INFO: Running AfterSuite actions on all nodes Oct 5 12:17:33.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Oct 5 12:17:33.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Oct 5 12:17:33.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Oct 5 12:17:33.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Oct 5 12:17:33.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Oct 5 12:17:33.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Oct 5 12:17:33.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:14:44.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:530 STEP: Building a driver namespace object, basename csi-mock-volumes-2113 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Oct 5 12:14:44.672: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-attacher Oct 5 12:14:44.676: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2113 Oct 5 12:14:44.676: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2113 Oct 5 12:14:44.679: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2113 Oct 5 12:14:44.683: INFO: creating *v1.Role: csi-mock-volumes-2113-7968/external-attacher-cfg-csi-mock-volumes-2113 Oct 5 12:14:44.687: INFO: creating *v1.RoleBinding: csi-mock-volumes-2113-7968/csi-attacher-role-cfg Oct 5 12:14:44.691: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-provisioner Oct 5 12:14:44.695: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2113 Oct 5 12:14:44.695: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2113 Oct 5 12:14:44.699: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2113 Oct 5 12:14:44.702: INFO: creating *v1.Role: csi-mock-volumes-2113-7968/external-provisioner-cfg-csi-mock-volumes-2113 Oct 5 
12:14:44.706: INFO: creating *v1.RoleBinding: csi-mock-volumes-2113-7968/csi-provisioner-role-cfg Oct 5 12:14:44.710: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-resizer Oct 5 12:14:44.714: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2113 Oct 5 12:14:44.714: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2113 Oct 5 12:14:44.718: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2113 Oct 5 12:14:44.722: INFO: creating *v1.Role: csi-mock-volumes-2113-7968/external-resizer-cfg-csi-mock-volumes-2113 Oct 5 12:14:44.726: INFO: creating *v1.RoleBinding: csi-mock-volumes-2113-7968/csi-resizer-role-cfg Oct 5 12:14:44.730: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-snapshotter Oct 5 12:14:44.733: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2113 Oct 5 12:14:44.733: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2113 Oct 5 12:14:44.737: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2113 Oct 5 12:14:44.741: INFO: creating *v1.Role: csi-mock-volumes-2113-7968/external-snapshotter-leaderelection-csi-mock-volumes-2113 Oct 5 12:14:44.744: INFO: creating *v1.RoleBinding: csi-mock-volumes-2113-7968/external-snapshotter-leaderelection Oct 5 12:14:44.748: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-mock Oct 5 12:14:44.752: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2113 Oct 5 12:14:44.755: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2113 Oct 5 12:14:44.759: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2113 Oct 5 12:14:44.762: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2113 Oct 5 12:14:44.766: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2113 Oct 5 12:14:44.769: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2113 Oct 5 12:14:44.773: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2113 Oct 5 12:14:44.777: INFO: creating *v1.StatefulSet: csi-mock-volumes-2113-7968/csi-mockplugin Oct 5 12:14:44.783: INFO: creating *v1.StatefulSet: csi-mock-volumes-2113-7968/csi-mockplugin-attacher Oct 5 12:14:44.787: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2113 to register on node v122-worker2 STEP: Creating pod Oct 5 12:14:49.805: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:14:49.812: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-vf42w] to have phase Bound Oct 5 12:14:49.815: INFO: PersistentVolumeClaim pvc-vf42w found but phase is Pending instead of Bound. Oct 5 12:14:51.820: INFO: PersistentVolumeClaim pvc-vf42w found and phase=Bound (2.008167038s) STEP: Creating pod Oct 5 12:15:01.844: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:15:01.848: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-68zw8] to have phase Bound Oct 5 12:15:01.851: INFO: PersistentVolumeClaim pvc-68zw8 found but phase is Pending instead of Bound. 
Oct 5 12:15:03.855: INFO: PersistentVolumeClaim pvc-68zw8 found and phase=Bound (2.007275772s) STEP: Creating pod Oct 5 12:15:13.881: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Oct 5 12:15:13.885: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ks5p6] to have phase Bound Oct 5 12:15:13.888: INFO: PersistentVolumeClaim pvc-ks5p6 found but phase is Pending instead of Bound. Oct 5 12:15:15.893: INFO: PersistentVolumeClaim pvc-ks5p6 found and phase=Bound (2.007892253s) STEP: Deleting pod pvc-volume-tester-zjmmn Oct 5 12:15:25.917: INFO: Deleting pod "pvc-volume-tester-zjmmn" in namespace "csi-mock-volumes-2113" Oct 5 12:15:25.923: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zjmmn" to be fully deleted STEP: Deleting pod pvc-volume-tester-8z5gc Oct 5 12:15:27.930: INFO: Deleting pod "pvc-volume-tester-8z5gc" in namespace "csi-mock-volumes-2113" Oct 5 12:15:27.936: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8z5gc" to be fully deleted STEP: Deleting pod pvc-volume-tester-tctg5 Oct 5 12:15:29.943: INFO: Deleting pod "pvc-volume-tester-tctg5" in namespace "csi-mock-volumes-2113" Oct 5 12:15:29.949: INFO: Wait up to 5m0s for pod "pvc-volume-tester-tctg5" to be fully deleted STEP: Deleting claim pvc-vf42w Oct 5 12:17:35.978: INFO: Waiting up to 2m0s for PersistentVolume pvc-a981c732-eebe-4ddf-b863-91209f77684d to get deleted Oct 5 12:17:35.986: INFO: PersistentVolume pvc-a981c732-eebe-4ddf-b863-91209f77684d found and phase=Bound (7.038891ms) Oct 5 12:17:37.991: INFO: PersistentVolume pvc-a981c732-eebe-4ddf-b863-91209f77684d was removed STEP: Deleting claim pvc-68zw8 Oct 5 12:17:37.999: INFO: Waiting up to 2m0s for PersistentVolume pvc-67b24f3a-0757-4cd0-84dc-c688a709cd08 to get deleted Oct 5 12:17:38.002: INFO: PersistentVolume pvc-67b24f3a-0757-4cd0-84dc-c688a709cd08 found and phase=Bound (2.955364ms) Oct 5 12:17:40.006: INFO: PersistentVolume pvc-67b24f3a-0757-4cd0-84dc-c688a709cd08 was removed STEP: Deleting claim pvc-ks5p6 Oct 5 12:17:40.016: INFO: Waiting up to 2m0s for PersistentVolume pvc-b455391b-99be-44b5-b9e0-972abcf04ad5 to get deleted Oct 5 12:17:40.019: INFO: PersistentVolume pvc-b455391b-99be-44b5-b9e0-972abcf04ad5 found and phase=Bound (3.399699ms) Oct 5 12:17:42.023: INFO: PersistentVolume pvc-b455391b-99be-44b5-b9e0-972abcf04ad5 was removed STEP: Deleting storageclass csi-mock-volumes-2113-scqr87w STEP: Deleting storageclass csi-mock-volumes-2113-sc9xlln STEP: Deleting storageclass csi-mock-volumes-2113-sc6fvcq STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2113 STEP: Waiting for namespaces [csi-mock-volumes-2113] to vanish STEP: uninstalling csi mock driver Oct 5 12:17:48.054: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-attacher Oct 5 12:17:48.059: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2113 Oct 5 12:17:48.064: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2113 Oct 5 12:17:48.069: INFO: deleting *v1.Role: csi-mock-volumes-2113-7968/external-attacher-cfg-csi-mock-volumes-2113 Oct 5 12:17:48.073: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2113-7968/csi-attacher-role-cfg Oct 5 12:17:48.077: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-provisioner Oct 5 12:17:48.082: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2113 Oct 5 12:17:48.086: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2113 Oct 5 12:17:48.090: INFO: 
deleting *v1.Role: csi-mock-volumes-2113-7968/external-provisioner-cfg-csi-mock-volumes-2113 Oct 5 12:17:48.094: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2113-7968/csi-provisioner-role-cfg Oct 5 12:17:48.099: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-resizer Oct 5 12:17:48.103: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2113 Oct 5 12:17:48.107: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2113 Oct 5 12:17:48.112: INFO: deleting *v1.Role: csi-mock-volumes-2113-7968/external-resizer-cfg-csi-mock-volumes-2113 Oct 5 12:17:48.116: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2113-7968/csi-resizer-role-cfg Oct 5 12:17:48.121: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-snapshotter Oct 5 12:17:48.126: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2113 Oct 5 12:17:48.130: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2113 Oct 5 12:17:48.135: INFO: deleting *v1.Role: csi-mock-volumes-2113-7968/external-snapshotter-leaderelection-csi-mock-volumes-2113 Oct 5 12:17:48.139: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2113-7968/external-snapshotter-leaderelection Oct 5 12:17:48.144: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2113-7968/csi-mock Oct 5 12:17:48.148: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2113 Oct 5 12:17:48.153: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2113 Oct 5 12:17:48.157: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2113 Oct 5 12:17:48.162: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2113 Oct 5 12:17:48.166: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2113 Oct 5 12:17:48.171: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2113 Oct 5 12:17:48.175: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2113 Oct 5 12:17:48.180: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2113-7968/csi-mockplugin Oct 5 12:17:48.185: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2113-7968/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-2113-7968 STEP: Waiting for namespaces [csi-mock-volumes-2113-7968] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:17:54.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:189.618 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI volume limit information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529 should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:530 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]","total":-1,"completed":26,"skipped":863,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one 
prebound PVC should be able to mount volume and read from pod1"]} Oct 5 12:17:54.208: INFO: Running AfterSuite actions on all nodes Oct 5 12:17:54.208: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 Oct 5 12:17:54.208: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 Oct 5 12:17:54.208: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 Oct 5 12:17:54.208: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Oct 5 12:17:54.208: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Oct 5 12:17:54.208: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Oct 5 12:17:54.208: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 5 12:15:51.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 5 12:20:51.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4108" for this suite. 
• [SLOW TEST:300.072 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  Should fail non-optional pod creation due to configMap object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":19,"skipped":611,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]}
Oct 5 12:20:51.567: INFO: Running AfterSuite actions on all nodes
Oct 5 12:20:51.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Oct 5 12:20:51.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Oct 5 12:20:51.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Oct 5 12:20:51.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Oct 5 12:20:51.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Oct 5 12:20:51.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Oct 5 12:20:51.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 5 12:11:49.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[It] should fail due to non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307
STEP: Creating local PVC and PV
Oct 5 12:11:49.940: INFO: Creating a PV followed by a PVC
Oct 5 12:11:49.951: INFO: Waiting for PV local-pvg5qbz to bind to PVC pvc-7cjw8
Oct 5 12:11:49.951: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7cjw8] to have phase Bound
Oct 5 12:11:49.954: INFO: PersistentVolumeClaim pvc-7cjw8 found but phase is Pending instead of Bound.
Oct 5 12:11:51.958: INFO: PersistentVolumeClaim pvc-7cjw8 found and phase=Bound (2.006985293s)
Oct 5 12:11:51.958: INFO: Waiting up to 3m0s for PersistentVolume local-pvg5qbz to have phase Bound
Oct 5 12:11:51.961: INFO: PersistentVolume local-pvg5qbz found and phase=Bound (3.099531ms)
STEP: Creating a pod
STEP: Cleaning up PVC and PV
Oct 5 12:21:51.998: INFO: Deleting PersistentVolumeClaim "pvc-7cjw8"
Oct 5 12:21:52.004: INFO: Deleting PersistentVolume "local-pvg5qbz"
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 5 12:21:52.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-4614" for this suite.
• [SLOW TEST:602.120 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Local volume that cannot be mounted [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304
    should fail due to non-existent path
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to non-existent path","total":-1,"completed":8,"skipped":353,"failed":0}
Oct 5 12:21:52.021: INFO: Running AfterSuite actions on all nodes
Oct 5 12:21:52.021: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Oct 5 12:21:52.021: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Oct 5 12:21:52.021: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Oct 5 12:21:52.021: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Oct 5 12:21:52.021: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Oct 5 12:21:52.021: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Oct 5 12:21:52.021: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
{"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev","total":-1,"completed":14,"skipped":416,"failed":0}
Oct 5 12:16:34.862: INFO: Running AfterSuite actions on all nodes
Oct 5 12:16:34.862: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Oct 5 12:16:34.862: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Oct 5 12:16:34.862: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Oct 5 12:16:34.862: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Oct 5 12:16:34.862: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Oct 5 12:16:34.862: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Oct 5 12:16:34.862: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Oct 5 12:21:52.072: INFO: Running AfterSuite actions on node 1
Oct 5 12:21:52.072: INFO: Skipping dumping logs from cluster

Summarizing 7 Failures:

[Fail] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] [BeforeEach] One pod requesting one prebound PVC should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133

[Fail] [sig-storage] PersistentVolumes-local [Volume type: block] [BeforeEach] Set fsGroup for local volume should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133

[Fail] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] [BeforeEach] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133

[Fail] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] [BeforeEach] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133

[Fail] [sig-storage] PersistentVolumes-local [Volume type: block] [BeforeEach] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133

[Fail] [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time [It] should be able to write from pod1 and read from pod2
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:749

[Fail] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] [BeforeEach] One pod requesting one prebound PVC should be able to mount volume and read from pod1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:133

Ran 171 of 6444 Specs in 1125.280 seconds
FAIL! -- 164 Passed | 7 Failed | 0 Pending | 6273 Skipped

Ginkgo ran 1 suite in 18m48.552523404s
Test Suite Failed