Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1655510477 - Will randomize all specs
Will run 5773 specs
Running in parallel across 10 nodes
Jun 18 00:01:18.828: INFO: >>> kubeConfig: /root/.kube/config
Jun 18 00:01:18.833: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 18 00:01:18.861: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 18 00:01:18.921: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting
Jun 18 00:01:18.921: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting
Jun 18 00:01:18.921: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 18 00:01:18.921: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 18 00:01:18.921: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 18 00:01:18.940: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 18 00:01:18.940: INFO: e2e test version: v1.21.9
Jun 18 00:01:18.942: INFO: kube-apiserver version: v1.21.1
Jun 18 00:01:18.942: INFO: >>> kubeConfig: /root/.kube/config
Jun 18 00:01:18.948: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Jun 18 00:01:18.944: INFO: >>> kubeConfig: /root/.kube/config
Jun 18 00:01:18.965: INFO: Cluster IP family: ipv4
Jun 18 00:01:18.946: INFO: >>> kubeConfig: /root/.kube/config
Jun 18 00:01:18.967: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Jun 18 00:01:18.953: INFO: >>> kubeConfig: /root/.kube/config
Jun 18 00:01:18.974: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Jun 18 00:01:18.958: INFO: >>> kubeConfig: /root/.kube/config
Jun 18 00:01:18.980: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Jun 18 00:01:18.969: INFO: >>> kubeConfig: /root/.kube/config
Jun 18 00:01:18.991: INFO: Cluster IP family: ipv4
Jun 18 00:01:18.968: INFO: >>> kubeConfig: /root/.kube/config
Jun 18 00:01:18.991: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Jun 18 00:01:18.977: INFO: >>> kubeConfig: /root/.kube/config
Jun 18 00:01:18.999: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun 18 00:01:18.979: INFO: >>> kubeConfig:
/root/.kube/config Jun 18 00:01:19.001: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSS ------------------------------ Jun 18 00:01:18.989: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:19.011: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:18.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv W0618 00:01:19.006615 40 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.006: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.010: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:01:19.012: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:19.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2422" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.046 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning W0618 00:01:19.046600 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.046: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.048: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should provision storage with non-default reclaim policy Retain 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403 Jun 18 00:01:19.050: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:19.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-2712" for this suite. S [SKIPPING] [0.036 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should provision storage with non-default reclaim policy Retain [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:404 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks W0618 00:01:19.051090 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.051: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.053: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 18 00:01:19.055: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:19.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-2874" for this suite. 
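The [SKIPPING] entries above all come from gates the suite applies in BeforeEach: a spec bails out early when the configured provider ("local" here) or the schedulable node count does not match what the spec needs. The sketch below shows roughly how such a gate looks in Ginkgo, the framework these e2e tests use. It is a minimal, self-contained illustration, not the framework's real code: skipUnlessProviderIs and the flag names are hypothetical stand-ins for the framework's TestContext plumbing, and the ginkgo v1 import path used by Kubernetes 1.21 is assumed.

package e2esketch

import (
	"flag"
	"fmt"
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/gomega"
)

// Illustrative flags standing in for the e2e framework's test context.
var (
	provider = flag.String("provider", "local", "cloud provider under test")
	numNodes = flag.Int("num-nodes", -1, "number of schedulable nodes, -1 if unknown")
)

// skipUnlessProviderIs mirrors the provider gate seen in the log: skip the
// spec unless the configured provider is one of the supported ones.
func skipUnlessProviderIs(supported ...string) {
	for _, p := range supported {
		if p == *provider {
			return
		}
	}
	ginkgo.Skip(fmt.Sprintf("Only supported for providers %v (not %s)", supported, *provider))
}

var _ = ginkgo.Describe("[sig-storage] Pod Disks (sketch)", func() {
	ginkgo.BeforeEach(func() {
		skipUnlessProviderIs("gce", "gke", "aws")
		if *numNodes < 2 {
			ginkgo.Skip(fmt.Sprintf("Requires at least 2 nodes (not %d)", *numNodes))
		}
	})
	ginkgo.It("should be able to delete a non-existent PD without error", func() {
		// never reached on a local, single-provider configuration
	})
})

func TestSketch(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "storage skip-gate sketch")
}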
S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 1 containers and 2 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mounted-volume-expand W0618 00:01:19.062936 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.063: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.064: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:61 Jun 18 00:01:19.066: INFO: Only supported for providers [aws gce] (not local) [AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:19.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mounted-volume-expand-4216" for this suite. 
[AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:108 Jun 18 00:01:19.084: INFO: AfterEach: Cleaning up resources for mounted volume resize S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should verify mounted devices can be resized [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122 Only supported for providers [aws gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:62 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks W0618 00:01:19.156677 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.157: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.158: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 18 00:01:19.160: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:19.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-8292" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 4 containers and 1 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63 STEP: Creating a pod to test emptydir subpath on tmpfs Jun 18 00:01:19.148: INFO: Waiting up to 5m0s for pod "pod-d5a307c7-2a86-47e2-8de1-c041922e5fbc" in namespace "emptydir-2640" to be "Succeeded or Failed" Jun 18 00:01:19.151: INFO: Pod "pod-d5a307c7-2a86-47e2-8de1-c041922e5fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282064ms Jun 18 00:01:21.155: INFO: Pod "pod-d5a307c7-2a86-47e2-8de1-c041922e5fbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006360656s Jun 18 00:01:23.158: INFO: Pod "pod-d5a307c7-2a86-47e2-8de1-c041922e5fbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009735161s STEP: Saw pod success Jun 18 00:01:23.158: INFO: Pod "pod-d5a307c7-2a86-47e2-8de1-c041922e5fbc" satisfied condition "Succeeded or Failed" Jun 18 00:01:23.160: INFO: Trying to get logs from node node2 pod pod-d5a307c7-2a86-47e2-8de1-c041922e5fbc container test-container: STEP: delete the pod Jun 18 00:01:23.177: INFO: Waiting for pod pod-d5a307c7-2a86-47e2-8de1-c041922e5fbc to disappear Jun 18 00:01:23.179: INFO: Pod pod-d5a307c7-2a86-47e2-8de1-c041922e5fbc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:23.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2640" for this suite. 
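The passing EmptyDir spec above polls its test pod until it reaches a terminal phase ("Waiting up to 5m0s for pod ... to be "Succeeded or Failed""). Outside the e2e framework, the same wait can be written against client-go roughly as below; this is a sketch, the helper name is ours, and the namespace and pod name are simply the ones that appear in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodTerminal polls a pod until it is Succeeded or Failed, the same
// condition the log waits on, with the same 5 minute ceiling.
func waitForPodTerminal(cs kubernetes.Interface, ns, name string) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	phase, err := waitForPodTerminal(cs, "emptydir-2640", "pod-d5a307c7-2a86-47e2-8de1-c041922e5fbc")
	fmt.Println(phase, err)
}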
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":1,"skipped":20,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:23.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete default persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692 Jun 18 00:01:23.217: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:23.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-4515" for this suite. S [SKIPPING] [0.029 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner Default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:691 should create and delete default persistent volumes [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:693 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:18.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W0618 00:01:19.013331 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.013: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.015: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 18 00:01:25.049: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt 
-- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9428 PodName:hostexec-node1-n6xq5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:25.049: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:25.162: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 18 00:01:25.162: INFO: exec node1: stdout: "0\n" Jun 18 00:01:25.162: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 18 00:01:25.162: INFO: exec node1: exit code: 0 Jun 18 00:01:25.162: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:25.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9428" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [6.195 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:18.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file W0618 00:01:19.016463 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.016: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.018: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 18 00:01:19.035: INFO: The status of Pod test-hostpath-type-hw22p is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:21.039: INFO: The status of Pod test-hostpath-type-hw22p is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:23.041: INFO: The status 
of Pod test-hostpath-type-hw22p is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:31.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-4483" for this suite. • [SLOW TEST:12.125 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket","total":-1,"completed":1,"skipped":6,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W0618 00:01:19.120770 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.121: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.123: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-7eda3d09-11a6-487f-a1ca-7aff45de8757" Jun 18 00:01:23.150: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7eda3d09-11a6-487f-a1ca-7aff45de8757 && dd if=/dev/zero of=/tmp/local-volume-test-7eda3d09-11a6-487f-a1ca-7aff45de8757/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-7eda3d09-11a6-487f-a1ca-7aff45de8757/file] Namespace:persistent-local-volumes-test-6791 PodName:hostexec-node1-6995b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:23.150: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:23.879: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7eda3d09-11a6-487f-a1ca-7aff45de8757/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] 
Namespace:persistent-local-volumes-test-6791 PodName:hostexec-node1-6995b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:23.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:01:23.992: INFO: Creating a PV followed by a PVC Jun 18 00:01:23.998: INFO: Waiting for PV local-pvrd9fv to bind to PVC pvc-djf78 Jun 18 00:01:23.998: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-djf78] to have phase Bound Jun 18 00:01:24.000: INFO: PersistentVolumeClaim pvc-djf78 found but phase is Pending instead of Bound. Jun 18 00:01:26.005: INFO: PersistentVolumeClaim pvc-djf78 found but phase is Pending instead of Bound. Jun 18 00:01:28.009: INFO: PersistentVolumeClaim pvc-djf78 found and phase=Bound (4.010791423s) Jun 18 00:01:28.009: INFO: Waiting up to 3m0s for PersistentVolume local-pvrd9fv to have phase Bound Jun 18 00:01:28.011: INFO: PersistentVolume local-pvrd9fv found and phase=Bound (2.531624ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 18 00:01:34.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6791 exec pod-419d18d5-51b8-4f92-8810-f9d3a17dad99 --namespace=persistent-local-volumes-test-6791 -- stat -c %g /mnt/volume1' Jun 18 00:01:34.291: INFO: stderr: "" Jun 18 00:01:34.291: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-419d18d5-51b8-4f92-8810-f9d3a17dad99 in namespace persistent-local-volumes-test-6791 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:01:34.296: INFO: Deleting PersistentVolumeClaim "pvc-djf78" Jun 18 00:01:34.301: INFO: Deleting PersistentVolume "local-pvrd9fv" Jun 18 00:01:34.305: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7eda3d09-11a6-487f-a1ca-7aff45de8757/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6791 PodName:hostexec-node1-6995b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:34.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-7eda3d09-11a6-487f-a1ca-7aff45de8757/file Jun 18 00:01:34.435: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6791 PodName:hostexec-node1-6995b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:34.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-7eda3d09-11a6-487f-a1ca-7aff45de8757 Jun 18 00:01:34.551: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7eda3d09-11a6-487f-a1ca-7aff45de8757] Namespace:persistent-local-volumes-test-6791 
PodName:hostexec-node1-6995b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:34.551: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:34.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6791" for this suite. • [SLOW TEST:15.588 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":1,"skipped":41,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:34.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Jun 18 00:01:34.727: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:34.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-5777" for this suite. 
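The [Volume type: blockfswithoutformat] spec above backs a local PersistentVolume with a loop device: it creates a file with dd, attaches it with losetup -f, recovers the device name via losetup | grep | awk, and on teardown detaches the device and removes the directory. The Go sketch below runs the same shell steps directly with os/exec (the test itself routes them through a hostexec pod with nsenter); the path is illustrative and the commands require root.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a small shell script and returns its trimmed combined output.
func run(script string) (string, error) {
	out, err := exec.Command("sh", "-c", script).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	dir := "/tmp/local-volume-test-example"

	// mkdir + dd + losetup -f, as in the ExecWithOptions command in the log.
	if _, err := run(fmt.Sprintf(
		"mkdir -p %[1]s && dd if=/dev/zero of=%[1]s/file bs=4096 count=5120 && losetup -f %[1]s/file", dir)); err != nil {
		panic(err)
	}

	// Recover the backing loop device name the same way the test does.
	loopDev, err := run(fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir))
	if err != nil {
		panic(err)
	}
	fmt.Println("backing loop device:", loopDev)

	// Teardown mirrors the AfterEach: detach the loop device, remove the dir.
	if _, err := run(fmt.Sprintf("losetup -d %s && rm -r %s", loopDev, dir)); err != nil {
		panic(err)
	}
}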
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:86 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 18 00:01:19.668: INFO: The status of Pod test-hostpath-type-mw9l2 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:21.671: INFO: The status of Pod test-hostpath-type-mw9l2 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:23.673: INFO: The status of Pod test-hostpath-type-mw9l2 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:25.673: INFO: The status of Pod test-hostpath-type-mw9l2 is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:35.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-2490" for this suite. 
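The HostPathType specs above first create 'afile' through a HostPathFileOrCreate volume, then mount a path with a deliberately mismatched type (HostPathSocket for a regular file, or HostPathFile for a path that does not exist) and check that kubelet rejects the mount with the expected error event. Below is a minimal sketch of the volume definitions involved, using the corev1 types; the volume names and paths are illustrative, not the ones the tests generate.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a volume with an explicit HostPathType. When the
// declared type does not match what actually exists at the path, kubelet
// refuses to set the volume up and emits the error event the specs look for.
func hostPathVolume(name, path string, t corev1.HostPathType) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &t},
		},
	}
}

func main() {
	// 'afile' is created up front with HostPathFileOrCreate, then re-mounted
	// with a deliberately wrong type, as in the spec that expects failure.
	ok := hostPathVolume("ok", "/mnt/test/afile", corev1.HostPathFileOrCreate)
	bad := hostPathVolume("bad", "/mnt/test/afile", corev1.HostPathSocket)
	fmt.Printf("%+v\n%+v\n", ok, bad)
}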
• [SLOW TEST:16.548 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile","total":-1,"completed":1,"skipped":53,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W0618 00:01:19.750458 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.750: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.752: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:01:25.783: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1dbdeb2a-db06-492f-8f4b-28cb7c9a3c31] Namespace:persistent-local-volumes-test-3329 PodName:hostexec-node1-kkct9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:25.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:01:25.890: INFO: Creating a PV followed by a PVC Jun 18 00:01:25.897: INFO: Waiting for PV local-pvqgzbm to bind to PVC pvc-4zr4c Jun 18 00:01:25.897: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4zr4c] to have phase Bound Jun 18 00:01:25.900: INFO: PersistentVolumeClaim pvc-4zr4c found but phase is Pending instead of Bound. 
Jun 18 00:01:27.903: INFO: PersistentVolumeClaim pvc-4zr4c found and phase=Bound (2.00521778s) Jun 18 00:01:27.903: INFO: Waiting up to 3m0s for PersistentVolume local-pvqgzbm to have phase Bound Jun 18 00:01:27.905: INFO: PersistentVolume local-pvqgzbm found and phase=Bound (1.855131ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 18 00:01:35.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-3329 exec pod-ad7438be-aa37-4e70-a524-60f606d954d8 --namespace=persistent-local-volumes-test-3329 -- stat -c %g /mnt/volume1' Jun 18 00:01:36.295: INFO: stderr: "" Jun 18 00:01:36.295: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-ad7438be-aa37-4e70-a524-60f606d954d8 in namespace persistent-local-volumes-test-3329 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:01:36.299: INFO: Deleting PersistentVolumeClaim "pvc-4zr4c" Jun 18 00:01:36.302: INFO: Deleting PersistentVolume "local-pvqgzbm" STEP: Removing the test directory Jun 18 00:01:36.306: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1dbdeb2a-db06-492f-8f4b-28cb7c9a3c31] Namespace:persistent-local-volumes-test-3329 PodName:hostexec-node1-kkct9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:36.306: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:36.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3329" for this suite. 
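For the [Volume type: dir] case above, "Creating a PV followed by a PVC" means building a PersistentVolume whose source is a plain directory on one node (created with the mkdir shown in the ExecWithOptions entry) and whose node affinity pins it to that node. The sketch below shows roughly what such a PV object looks like; the name, capacity and path are illustrative rather than the generated values from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV returns a directory-backed local PersistentVolume pinned to one node.
func localPV(name, node, path string) *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: path},
			},
			// A local PV must declare which node the path lives on; the
			// scheduler uses this to place pods that claim it.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{node},
						}},
					}},
				},
			},
		},
	}
}

func main() {
	pv := localPV("local-pv-example", "node1", "/tmp/local-volume-test-example")
	fmt.Printf("%+v\n", pv.Spec)
}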
• [SLOW TEST:17.421 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":1,"skipped":85,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:36.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:39 Jun 18 00:01:36.683: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:36.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-3661" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should schedule pods in the same zones as statically provisioned PVs [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:50 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:40 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:18.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W0618 00:01:19.008314 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 18 00:01:19.008: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 18 00:01:19.010: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-05251a6a-e03d-469d-8fee-2ccdf2995948" Jun 18 00:01:23.052: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-05251a6a-e03d-469d-8fee-2ccdf2995948" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-05251a6a-e03d-469d-8fee-2ccdf2995948" "/tmp/local-volume-test-05251a6a-e03d-469d-8fee-2ccdf2995948"] Namespace:persistent-local-volumes-test-9388 PodName:hostexec-node2-cd97b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:23.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:01:23.149: INFO: Creating a PV followed by a PVC Jun 18 00:01:23.159: INFO: Waiting for PV local-pv29gk9 to bind to PVC pvc-p5899 Jun 18 00:01:23.159: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-p5899] to have phase Bound Jun 18 00:01:23.161: INFO: PersistentVolumeClaim pvc-p5899 found but phase is Pending instead of Bound. 
Jun 18 00:01:25.165: INFO: PersistentVolumeClaim pvc-p5899 found and phase=Bound (2.005767883s) Jun 18 00:01:25.165: INFO: Waiting up to 3m0s for PersistentVolume local-pv29gk9 to have phase Bound Jun 18 00:01:25.167: INFO: PersistentVolume local-pv29gk9 found and phase=Bound (2.200562ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:01:31.195: INFO: pod "pod-69e8b81b-c721-4633-af4f-976f3c558778" created on Node "node2" STEP: Writing in pod1 Jun 18 00:01:31.195: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9388 PodName:pod-69e8b81b-c721-4633-af4f-976f3c558778 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:31.195: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:31.299: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:01:31.299: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9388 PodName:pod-69e8b81b-c721-4633-af4f-976f3c558778 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:31.299: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:31.386: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-69e8b81b-c721-4633-af4f-976f3c558778 in namespace persistent-local-volumes-test-9388 STEP: Creating pod2 STEP: Creating a pod Jun 18 00:01:39.411: INFO: pod "pod-aa67c9a1-13eb-4d1d-b3d8-1f027bebcfe3" created on Node "node2" STEP: Reading in pod2 Jun 18 00:01:39.411: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9388 PodName:pod-aa67c9a1-13eb-4d1d-b3d8-1f027bebcfe3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:39.411: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:39.558: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-aa67c9a1-13eb-4d1d-b3d8-1f027bebcfe3 in namespace persistent-local-volumes-test-9388 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:01:39.565: INFO: Deleting PersistentVolumeClaim "pvc-p5899" Jun 18 00:01:39.569: INFO: Deleting PersistentVolume "local-pv29gk9" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-05251a6a-e03d-469d-8fee-2ccdf2995948" Jun 18 00:01:39.574: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-05251a6a-e03d-469d-8fee-2ccdf2995948"] Namespace:persistent-local-volumes-test-9388 PodName:hostexec-node2-cd97b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:39.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:01:39.674: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-05251a6a-e03d-469d-8fee-2ccdf2995948] Namespace:persistent-local-volumes-test-9388 PodName:hostexec-node2-cd97b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:39.674: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:39.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9388" for this suite. • [SLOW TEST:20.830 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":11,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:01:25.110: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e106f77f-8393-495c-aa0c-dd67a54b2e4a] Namespace:persistent-local-volumes-test-6438 PodName:hostexec-node1-qmlsp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:25.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:01:25.263: INFO: Creating a PV followed by a PVC Jun 18 00:01:25.274: INFO: Waiting for PV local-pvj2l42 to bind to PVC pvc-ch5jm Jun 18 00:01:25.274: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ch5jm] to have phase Bound Jun 18 00:01:25.276: INFO: PersistentVolumeClaim pvc-ch5jm found but phase is Pending instead of Bound. 
Jun 18 00:01:27.280: INFO: PersistentVolumeClaim pvc-ch5jm found and phase=Bound (2.005988589s) Jun 18 00:01:27.280: INFO: Waiting up to 3m0s for PersistentVolume local-pvj2l42 to have phase Bound Jun 18 00:01:27.283: INFO: PersistentVolume local-pvj2l42 found and phase=Bound (2.537035ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 18 00:01:33.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6438 exec pod-82240884-ec23-4cd5-9c61-cec094200620 --namespace=persistent-local-volumes-test-6438 -- stat -c %g /mnt/volume1' Jun 18 00:01:33.684: INFO: stderr: "" Jun 18 00:01:33.684: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 18 00:01:39.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6438 exec pod-3f42e8ff-35f4-47f5-91fb-59fb4ca7e773 --namespace=persistent-local-volumes-test-6438 -- stat -c %g /mnt/volume1' Jun 18 00:01:40.018: INFO: stderr: "" Jun 18 00:01:40.018: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-82240884-ec23-4cd5-9c61-cec094200620 in namespace persistent-local-volumes-test-6438 STEP: Deleting second pod STEP: Deleting pod pod-3f42e8ff-35f4-47f5-91fb-59fb4ca7e773 in namespace persistent-local-volumes-test-6438 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:01:40.034: INFO: Deleting PersistentVolumeClaim "pvc-ch5jm" Jun 18 00:01:40.041: INFO: Deleting PersistentVolume "local-pvj2l42" STEP: Removing the test directory Jun 18 00:01:40.045: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e106f77f-8393-495c-aa0c-dd67a54b2e4a] Namespace:persistent-local-volumes-test-6438 PodName:hostexec-node1-qmlsp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:40.045: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:40.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6438" for this suite. 
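The fsGroup checks above all follow the same pattern: the pod's pod-level security context sets fsGroup, kubelet applies that group to the volume contents when it sets the volume up, and the test verifies the result with `stat -c %g /mnt/volume1`, expecting "1234". Below is a minimal pod shape with the relevant fields; the image, claim name and command are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// fsGroupPod builds a pod whose volume should be group-owned by fsGroup once
// kubelet has mounted it, which is what the stat check in the log verifies.
func fsGroupPod(fsGroup int64) *corev1.Pod {
	return &corev1.Pod{
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:    "write-pod",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "volume1",
					MountPath: "/mnt/volume1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "pvc-example",
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", fsGroupPod(1234).Spec.SecurityContext)
}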
• [SLOW TEST:21.139 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":1,"skipped":20,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:31.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 18 00:01:31.168: INFO: The status of Pod test-hostpath-type-5fpbt is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:33.172: INFO: The status of Pod test-hostpath-type-5fpbt is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:35.173: INFO: The status of Pod test-hostpath-type-5fpbt is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:37.171: INFO: The status of Pod test-hostpath-type-5fpbt is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:39.172: INFO: The status of Pod test-hostpath-type-5fpbt is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 18 00:01:39.174: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-6187 PodName:test-hostpath-type-5fpbt ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:39.174: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:41.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-6187" for this suite. 
• [SLOW TEST:10.470 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev","total":-1,"completed":2,"skipped":14,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:40.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 STEP: Creating a pod to test downward API volume plugin Jun 18 00:01:40.261: INFO: Waiting up to 5m0s for pod "metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d" in namespace "projected-7076" to be "Succeeded or Failed" Jun 18 00:01:40.263: INFO: Pod "metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.998039ms Jun 18 00:01:42.267: INFO: Pod "metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005947369s Jun 18 00:01:44.271: INFO: Pod "metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010027188s Jun 18 00:01:46.276: INFO: Pod "metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014424853s STEP: Saw pod success Jun 18 00:01:46.276: INFO: Pod "metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d" satisfied condition "Succeeded or Failed" Jun 18 00:01:46.278: INFO: Trying to get logs from node node2 pod metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d container client-container: STEP: delete the pod Jun 18 00:01:46.296: INFO: Waiting for pod metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d to disappear Jun 18 00:01:46.298: INFO: Pod metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:46.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7076" for this suite. 
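Note: the projected downward API spec above passes by reading the client container's log after the pod reports Succeeded; while the pod still exists, roughly the same check by hand (namespace, pod and container names taken from the log) is:

    kubectl --kubeconfig=/root/.kube/config -n projected-7076 \
      logs metadata-volume-af93ca90-f3b0-42fb-ba8d-6d540c956b7d -c client-container
    # the output should contain the pod's own name, injected through the downward API volume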
• [SLOW TEST:6.077 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":29,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:46.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 18 00:01:46.348: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:46.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-9300" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be able to delete a non-existent PD without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:36.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 18 00:01:36.782: INFO: The status of Pod test-hostpath-type-mq69j is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:38.785: INFO: The status of Pod test-hostpath-type-mq69j is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:40.787: INFO: The status of Pod test-hostpath-type-mq69j is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:42.786: INFO: The status of Pod test-hostpath-type-mq69j is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:44.787: INFO: The status of Pod test-hostpath-type-mq69j is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 18 00:01:44.790: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] 
Namespace:host-path-type-char-dev-4430 PodName:test-hostpath-type-mq69j ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:44.790: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:46.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-4430" for this suite. • [SLOW TEST:10.171 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile","total":-1,"completed":2,"skipped":132,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:41.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 18 00:01:41.711: INFO: The status of Pod test-hostpath-type-mjjqh is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:43.714: INFO: The status of Pod test-hostpath-type-mjjqh is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:45.715: INFO: The status of Pod test-hostpath-type-mjjqh is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:47.715: INFO: The status of Pod test-hostpath-type-mjjqh is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 18 00:01:47.717: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-403 PodName:test-hostpath-type-mjjqh ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:47.717: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:49.839: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-403" for this suite. • [SLOW TEST:8.172 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev","total":-1,"completed":3,"skipped":46,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:39.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120" Jun 18 00:01:45.955: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120 && dd if=/dev/zero of=/tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120/file] Namespace:persistent-local-volumes-test-8497 PodName:hostexec-node1-hnnnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:45.956: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:46.121: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8497 PodName:hostexec-node1-hnnnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:46.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:01:46.220: INFO: Creating a PV followed by a PVC Jun 18 00:01:46.227: INFO: Waiting for PV local-pvl647n to bind to PVC pvc-frxgw Jun 18 00:01:46.227: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-frxgw] to have phase Bound Jun 18 00:01:46.229: INFO: PersistentVolumeClaim pvc-frxgw found but phase is Pending instead of Bound. 
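Note: the [Volume type: block] setup above builds the PV's backing device from a file via dd and losetup on the node; the essential sequence, condensed from the exec commands in the log, is:

    DIR=/tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120   # path from the log
    mkdir -p "$DIR"
    dd if=/dev/zero of="$DIR/file" bs=4096 count=5120                 # ~20 MiB backing file
    losetup -f "$DIR/file"                                            # attach it to the first free loop device
    # discover which loop device was picked
    E2E_LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
    echo "$E2E_LOOP_DEV"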
Jun 18 00:01:48.234: INFO: PersistentVolumeClaim pvc-frxgw found and phase=Bound (2.007415859s) Jun 18 00:01:48.234: INFO: Waiting up to 3m0s for PersistentVolume local-pvl647n to have phase Bound Jun 18 00:01:48.237: INFO: PersistentVolume local-pvl647n found and phase=Bound (2.665459ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:01:56.263: INFO: pod "pod-52458f8b-d555-486c-8dae-128534eef5ae" created on Node "node1" STEP: Writing in pod1 Jun 18 00:01:56.263: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8497 PodName:pod-52458f8b-d555-486c-8dae-128534eef5ae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:56.263: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:56.501: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000160 seconds, 109.9KB/s", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 18 00:01:56.501: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-8497 PodName:pod-52458f8b-d555-486c-8dae-128534eef5ae ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:56.501: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:56.583: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-52458f8b-d555-486c-8dae-128534eef5ae in namespace persistent-local-volumes-test-8497 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:01:56.588: INFO: Deleting PersistentVolumeClaim "pvc-frxgw" Jun 18 00:01:56.591: INFO: Deleting PersistentVolume "local-pvl647n" Jun 18 00:01:56.595: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8497 PodName:hostexec-node1-hnnnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:56.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down 
block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120/file Jun 18 00:01:56.683: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8497 PodName:hostexec-node1-hnnnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:56.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120 Jun 18 00:01:56.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6feb305b-56e6-411e-b510-7937b3612120] Namespace:persistent-local-volumes-test-8497 PodName:hostexec-node1-hnnnr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:56.937: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:01:57.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8497" for this suite. • [SLOW TEST:17.160 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":56,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:34.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:01:40.853: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4d37c704-55ea-41a2-9680-143ebbdded36] Namespace:persistent-local-volumes-test-1645 PodName:hostexec-node1-xwt2m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:40.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:01:41.105: INFO: Creating a PV 
followed by a PVC Jun 18 00:01:41.112: INFO: Waiting for PV local-pvszcqk to bind to PVC pvc-hblt2 Jun 18 00:01:41.112: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hblt2] to have phase Bound Jun 18 00:01:41.115: INFO: PersistentVolumeClaim pvc-hblt2 found but phase is Pending instead of Bound. Jun 18 00:01:43.119: INFO: PersistentVolumeClaim pvc-hblt2 found and phase=Bound (2.007090929s) Jun 18 00:01:43.120: INFO: Waiting up to 3m0s for PersistentVolume local-pvszcqk to have phase Bound Jun 18 00:01:43.122: INFO: PersistentVolume local-pvszcqk found and phase=Bound (2.364604ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:01:53.147: INFO: pod "pod-bbd0901e-3c80-4629-bc50-22d13f6b3287" created on Node "node1" STEP: Writing in pod1 Jun 18 00:01:53.147: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1645 PodName:pod-bbd0901e-3c80-4629-bc50-22d13f6b3287 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:53.147: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:53.257: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:01:53.257: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1645 PodName:pod-bbd0901e-3c80-4629-bc50-22d13f6b3287 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:53.257: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:53.337: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-bbd0901e-3c80-4629-bc50-22d13f6b3287 in namespace persistent-local-volumes-test-1645 STEP: Creating pod2 STEP: Creating a pod Jun 18 00:02:01.367: INFO: pod "pod-8911d453-49b8-46d4-bd9f-03211606a6e5" created on Node "node1" STEP: Reading in pod2 Jun 18 00:02:01.367: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1645 PodName:pod-8911d453-49b8-46d4-bd9f-03211606a6e5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:01.367: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:01.471: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-8911d453-49b8-46d4-bd9f-03211606a6e5 in namespace persistent-local-volumes-test-1645 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:02:01.476: INFO: Deleting PersistentVolumeClaim "pvc-hblt2" Jun 18 00:02:01.480: INFO: Deleting PersistentVolume "local-pvszcqk" STEP: Removing the test directory Jun 18 00:02:01.484: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4d37c704-55ea-41a2-9680-143ebbdded36] Namespace:persistent-local-volumes-test-1645 PodName:hostexec-node1-xwt2m ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 18 00:02:01.484: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:01.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1645" for this suite. • [SLOW TEST:26.790 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":74,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:57.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 18 00:01:57.129: INFO: The status of Pod test-hostpath-type-wg6d5 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:01:59.132: INFO: The status of Pod test-hostpath-type-wg6d5 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:02:01.133: INFO: The status of Pod test-hostpath-type-wg6d5 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:02:03.134: INFO: The status of Pod test-hostpath-type-wg6d5 is Running (Ready = true) STEP: running on node node2 [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-6306" for this suite. 
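Note: the "Two pods mounting a local volume one after the other" specs in this run all follow the same write-then-read pattern; the framework drives it through ExecWithOptions, but expressed as plain kubectl exec (names taken from the dir-volume spec above) it amounts to:

    # pod1 writes a marker file into the mounted local volume
    kubectl -n persistent-local-volumes-test-1645 exec pod-bbd0901e-3c80-4629-bc50-22d13f6b3287 -- \
      sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
    # pod2, created on the same node after pod1 is deleted, reads it back from the same PV
    kubectl -n persistent-local-volumes-test-1645 exec pod-8911d453-49b8-46d4-bd9f-03211606a6e5 -- \
      cat /mnt/volume1/test-file     # -> test-file-content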
• [SLOW TEST:12.080 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset","total":-1,"completed":3,"skipped":65,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:09.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Jun 18 00:02:09.211: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Jun 18 00:02:09.216: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:09.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-7086" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.041 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PVC in active use by a pod is not removed immediately [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:46.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:01:54.970: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-de8fe9ee-cbfd-49c1-baf3-6b36c5eb3a33 && mount --bind 
/tmp/local-volume-test-de8fe9ee-cbfd-49c1-baf3-6b36c5eb3a33 /tmp/local-volume-test-de8fe9ee-cbfd-49c1-baf3-6b36c5eb3a33] Namespace:persistent-local-volumes-test-1618 PodName:hostexec-node2-vlpbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:54.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:01:55.064: INFO: Creating a PV followed by a PVC Jun 18 00:01:55.071: INFO: Waiting for PV local-pvtrr5p to bind to PVC pvc-4l628 Jun 18 00:01:55.071: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4l628] to have phase Bound Jun 18 00:01:55.074: INFO: PersistentVolumeClaim pvc-4l628 found but phase is Pending instead of Bound. Jun 18 00:01:57.078: INFO: PersistentVolumeClaim pvc-4l628 found and phase=Bound (2.006829996s) Jun 18 00:01:57.078: INFO: Waiting up to 3m0s for PersistentVolume local-pvtrr5p to have phase Bound Jun 18 00:01:57.080: INFO: PersistentVolume local-pvtrr5p found and phase=Bound (2.014691ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 18 00:02:03.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1618 exec pod-35b0a330-4bc5-4695-92a1-0972ede1c2d3 --namespace=persistent-local-volumes-test-1618 -- stat -c %g /mnt/volume1' Jun 18 00:02:03.358: INFO: stderr: "" Jun 18 00:02:03.359: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 18 00:02:13.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1618 exec pod-b74f7728-f826-4ac6-854a-bca9643a9ca1 --namespace=persistent-local-volumes-test-1618 -- stat -c %g /mnt/volume1' Jun 18 00:02:13.637: INFO: stderr: "" Jun 18 00:02:13.637: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-35b0a330-4bc5-4695-92a1-0972ede1c2d3 in namespace persistent-local-volumes-test-1618 STEP: Deleting second pod STEP: Deleting pod pod-b74f7728-f826-4ac6-854a-bca9643a9ca1 in namespace persistent-local-volumes-test-1618 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:02:13.650: INFO: Deleting PersistentVolumeClaim "pvc-4l628" Jun 18 00:02:13.655: INFO: Deleting PersistentVolume "local-pvtrr5p" STEP: Removing the test directory Jun 18 00:02:13.660: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-de8fe9ee-cbfd-49c1-baf3-6b36c5eb3a33 && rm -r /tmp/local-volume-test-de8fe9ee-cbfd-49c1-baf3-6b36c5eb3a33] Namespace:persistent-local-volumes-test-1618 PodName:hostexec-node2-vlpbq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:13.660: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 
00:02:13.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1618" for this suite. • [SLOW TEST:27.000 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":3,"skipped":134,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:49.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:01:56.027: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-9cc639ea-6d06-43fe-9f8b-a81e3ee2ae2d-backend && ln -s /tmp/local-volume-test-9cc639ea-6d06-43fe-9f8b-a81e3ee2ae2d-backend /tmp/local-volume-test-9cc639ea-6d06-43fe-9f8b-a81e3ee2ae2d] Namespace:persistent-local-volumes-test-4855 PodName:hostexec-node2-sbrsq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:56.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:01:56.225: INFO: Creating a PV followed by a PVC Jun 18 00:01:56.233: INFO: Waiting for PV local-pvxb492 to bind to PVC pvc-4k45f Jun 18 00:01:56.233: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4k45f] to have phase Bound Jun 18 00:01:56.235: INFO: PersistentVolumeClaim pvc-4k45f found but phase is Pending instead of Bound. 
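Note: for the [Volume type: dir-link] case above, the local path handed to the PV is a symlink to a backing directory; the node-side setup mirrored from the exec command is:

    BACKEND=/tmp/local-volume-test-9cc639ea-6d06-43fe-9f8b-a81e3ee2ae2d-backend
    LINK=/tmp/local-volume-test-9cc639ea-6d06-43fe-9f8b-a81e3ee2ae2d
    mkdir "$BACKEND" && ln -s "$BACKEND" "$LINK"   # the PV points at the symlink, data lives in the backend dir
    # teardown later removes both: rm -r "$LINK" && rm -r "$BACKEND"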
Jun 18 00:01:58.239: INFO: PersistentVolumeClaim pvc-4k45f found and phase=Bound (2.005914567s) Jun 18 00:01:58.239: INFO: Waiting up to 3m0s for PersistentVolume local-pvxb492 to have phase Bound Jun 18 00:01:58.241: INFO: PersistentVolume local-pvxb492 found and phase=Bound (1.926018ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:02:04.266: INFO: pod "pod-1c41acf7-2f17-4976-afc3-49d8daea2e8c" created on Node "node2" STEP: Writing in pod1 Jun 18 00:02:04.266: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4855 PodName:pod-1c41acf7-2f17-4976-afc3-49d8daea2e8c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:04.266: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:04.346: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:02:04.346: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4855 PodName:pod-1c41acf7-2f17-4976-afc3-49d8daea2e8c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:04.346: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:04.431: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-1c41acf7-2f17-4976-afc3-49d8daea2e8c in namespace persistent-local-volumes-test-4855 STEP: Creating pod2 STEP: Creating a pod Jun 18 00:02:14.460: INFO: pod "pod-e7c5a1d9-d1e0-4e82-b9d4-87cba3f908c0" created on Node "node2" STEP: Reading in pod2 Jun 18 00:02:14.460: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4855 PodName:pod-e7c5a1d9-d1e0-4e82-b9d4-87cba3f908c0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:14.460: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:14.760: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-e7c5a1d9-d1e0-4e82-b9d4-87cba3f908c0 in namespace persistent-local-volumes-test-4855 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:02:14.764: INFO: Deleting PersistentVolumeClaim "pvc-4k45f" Jun 18 00:02:14.768: INFO: Deleting PersistentVolume "local-pvxb492" STEP: Removing the test directory Jun 18 00:02:14.771: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9cc639ea-6d06-43fe-9f8b-a81e3ee2ae2d && rm -r /tmp/local-volume-test-9cc639ea-6d06-43fe-9f8b-a81e3ee2ae2d-backend] Namespace:persistent-local-volumes-test-4855 PodName:hostexec-node2-sbrsq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:14.771: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:14.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4855" for this suite. • [SLOW TEST:24.948 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":106,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:09.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 STEP: Creating configMap with name projected-configmap-test-volume-0322760f-152b-4ecf-ab3c-a587cec58485 STEP: Creating a pod to test consume configMaps Jun 18 00:02:09.335: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-32633442-5654-4ced-ae0d-4d0e35dfa1fa" in namespace "projected-1713" to be "Succeeded or Failed" Jun 18 00:02:09.338: INFO: Pod "pod-projected-configmaps-32633442-5654-4ced-ae0d-4d0e35dfa1fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.086349ms Jun 18 00:02:11.342: INFO: Pod "pod-projected-configmaps-32633442-5654-4ced-ae0d-4d0e35dfa1fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007450424s Jun 18 00:02:13.347: INFO: Pod "pod-projected-configmaps-32633442-5654-4ced-ae0d-4d0e35dfa1fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01236508s Jun 18 00:02:15.351: INFO: Pod "pod-projected-configmaps-32633442-5654-4ced-ae0d-4d0e35dfa1fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016193516s STEP: Saw pod success Jun 18 00:02:15.351: INFO: Pod "pod-projected-configmaps-32633442-5654-4ced-ae0d-4d0e35dfa1fa" satisfied condition "Succeeded or Failed" Jun 18 00:02:15.353: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-32633442-5654-4ced-ae0d-4d0e35dfa1fa container agnhost-container: STEP: delete the pod Jun 18 00:02:15.363: INFO: Waiting for pod pod-projected-configmaps-32633442-5654-4ced-ae0d-4d0e35dfa1fa to disappear Jun 18 00:02:15.365: INFO: Pod pod-projected-configmaps-32633442-5654-4ced-ae0d-4d0e35dfa1fa no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:15.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1713" for this suite. • [SLOW TEST:6.072 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":106,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:46.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3" Jun 18 00:01:52.422: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3 && dd if=/dev/zero of=/tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3/file] Namespace:persistent-local-volumes-test-2059 PodName:hostexec-node1-zc5bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:52.422: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:52.627: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2059 PodName:hostexec-node1-zc5bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 18 00:01:52.627: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:52.720: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop2 && mount -t ext4 /dev/loop2 /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3 && chmod o+rwx /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3] Namespace:persistent-local-volumes-test-2059 PodName:hostexec-node1-zc5bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:01:52.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:01:53.081: INFO: Creating a PV followed by a PVC Jun 18 00:01:53.089: INFO: Waiting for PV local-pvknvhx to bind to PVC pvc-sdz2k Jun 18 00:01:53.089: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-sdz2k] to have phase Bound Jun 18 00:01:53.092: INFO: PersistentVolumeClaim pvc-sdz2k found but phase is Pending instead of Bound. Jun 18 00:01:55.095: INFO: PersistentVolumeClaim pvc-sdz2k found but phase is Pending instead of Bound. Jun 18 00:01:57.099: INFO: PersistentVolumeClaim pvc-sdz2k found and phase=Bound (4.009541258s) Jun 18 00:01:57.099: INFO: Waiting up to 3m0s for PersistentVolume local-pvknvhx to have phase Bound Jun 18 00:01:57.102: INFO: PersistentVolume local-pvknvhx found and phase=Bound (2.684497ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:02:03.126: INFO: pod "pod-8bd03161-40e1-477b-a0aa-d3a088378556" created on Node "node1" STEP: Writing in pod1 Jun 18 00:02:03.126: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2059 PodName:pod-8bd03161-40e1-477b-a0aa-d3a088378556 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:03.126: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:03.244: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:02:03.244: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2059 PodName:pod-8bd03161-40e1-477b-a0aa-d3a088378556 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:03.244: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:03.433: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-8bd03161-40e1-477b-a0aa-d3a088378556 in namespace persistent-local-volumes-test-2059 STEP: Creating pod2 STEP: Creating a pod Jun 18 00:02:15.462: INFO: pod "pod-34916837-ffed-4737-ba34-7c2cf42fd1fe" created on Node "node1" STEP: Reading in pod2 Jun 18 00:02:15.462: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2059 PodName:pod-34916837-ffed-4737-ba34-7c2cf42fd1fe ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:15.462: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:15.562: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: 
Deleting pod2 STEP: Deleting pod pod-34916837-ffed-4737-ba34-7c2cf42fd1fe in namespace persistent-local-volumes-test-2059 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:02:15.567: INFO: Deleting PersistentVolumeClaim "pvc-sdz2k" Jun 18 00:02:15.571: INFO: Deleting PersistentVolume "local-pvknvhx" Jun 18 00:02:15.574: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3] Namespace:persistent-local-volumes-test-2059 PodName:hostexec-node1-zc5bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:15.574: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:15.687: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2059 PodName:hostexec-node1-zc5bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:15.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop2" on node "node1" at path /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3/file Jun 18 00:02:15.828: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop2] Namespace:persistent-local-volumes-test-2059 PodName:hostexec-node1-zc5bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:15.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3 Jun 18 00:02:15.991: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3] Namespace:persistent-local-volumes-test-2059 PodName:hostexec-node1-zc5bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:15.991: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:16.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2059" for this suite. 
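Note: teardown for the formatted loop-device volumes above runs in the reverse order of setup; condensed from the exec commands in the log:

    DIR=/tmp/local-volume-test-6f4d5725-61f0-44a7-8910-c8e2dcf6d3c3
    umount "$DIR"                                   # unmount the ext4 filesystem first
    LOOP=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
    losetup -d "$LOOP"                              # detach the loop device (/dev/loop2 in this run)
    rm -r "$DIR"                                    # remove the backing file and test directory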
• [SLOW TEST:29.872 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":40,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:14.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 18 00:02:14.969: INFO: The status of Pod test-hostpath-type-hlwmh is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:02:16.974: INFO: The status of Pod test-hostpath-type-hlwmh is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:02:18.973: INFO: The status of Pod test-hostpath-type-hlwmh is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:21.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-943" for this suite. 
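Note: the HostPathType mismatch specs above (socket vs. character device vs. block device) hinge on what kind of file actually sits at the hostPath; an illustrative way to inspect that classification outside the framework, with paths patterned after the fixtures in this log, is:

    # kubelet rejects the mount when the path type does not match the declared HostPathType
    stat -c %F /mnt/test/asocket     # -> "socket"
    stat -c %F /mnt/test/achardev    # -> "character special file"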
• [SLOW TEST:6.076 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev","total":-1,"completed":5,"skipped":111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:21.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 18 00:02:21.085: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:21.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-35" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:15.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c" Jun 18 00:02:19.469: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c && dd if=/dev/zero of=/tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c/file] 
Namespace:persistent-local-volumes-test-334 PodName:hostexec-node1-8gflk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:19.469: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:19.597: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-334 PodName:hostexec-node1-8gflk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:19.598: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:19.684: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c && chmod o+rwx /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c] Namespace:persistent-local-volumes-test-334 PodName:hostexec-node1-8gflk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:19.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:02:19.928: INFO: Creating a PV followed by a PVC Jun 18 00:02:19.936: INFO: Waiting for PV local-pvdpwkn to bind to PVC pvc-s4n4n Jun 18 00:02:19.936: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-s4n4n] to have phase Bound Jun 18 00:02:19.938: INFO: PersistentVolumeClaim pvc-s4n4n found but phase is Pending instead of Bound. Jun 18 00:02:21.941: INFO: PersistentVolumeClaim pvc-s4n4n found and phase=Bound (2.005724526s) Jun 18 00:02:21.941: INFO: Waiting up to 3m0s for PersistentVolume local-pvdpwkn to have phase Bound Jun 18 00:02:21.944: INFO: PersistentVolume local-pvdpwkn found and phase=Bound (2.530527ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:02:25.970: INFO: pod "pod-f4aed093-2d63-48a4-9883-7c14953a748e" created on Node "node1" STEP: Writing in pod1 Jun 18 00:02:25.970: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-334 PodName:pod-f4aed093-2d63-48a4-9883-7c14953a748e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:25.970: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:26.070: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 18 00:02:26.070: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-334 PodName:pod-f4aed093-2d63-48a4-9883-7c14953a748e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:26.071: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:26.233: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 18 00:02:26.233: 
INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-334 PodName:pod-f4aed093-2d63-48a4-9883-7c14953a748e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:26.233: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:26.313: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-f4aed093-2d63-48a4-9883-7c14953a748e in namespace persistent-local-volumes-test-334 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:02:26.320: INFO: Deleting PersistentVolumeClaim "pvc-s4n4n" Jun 18 00:02:26.325: INFO: Deleting PersistentVolume "local-pvdpwkn" Jun 18 00:02:26.329: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c] Namespace:persistent-local-volumes-test-334 PodName:hostexec-node1-8gflk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:26.329: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:26.422: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-334 PodName:hostexec-node1-8gflk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:26.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c/file Jun 18 00:02:27.180: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-334 PodName:hostexec-node1-8gflk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:27.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c Jun 18 00:02:27.275: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4bc5a3f4-a3cb-42c2-beb7-bb4855f7c81c] Namespace:persistent-local-volumes-test-334 PodName:hostexec-node1-8gflk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:27.275: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:27.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-334" for this suite. 
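For reference, the blockfswithformat volume prepared above is a loopback file formatted as ext4 and mounted on node1; the test then publishes it as a local PersistentVolume pinned to that node and binds a claim to it. A rough sketch of an equivalent PV/PVC pair built with the core/v1 types is shown below; the names, size, path and StorageClass are illustrative, not the framework's exact objects.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := "local-storage" // illustrative StorageClass name
	// A local PV exposing a path on node1. Node affinity is mandatory for
	// local volumes so consumers only schedule onto the node that has the data.
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-demo"},
		Spec: corev1.PersistentVolumeSpec{
			StorageClassName: sc,
			Capacity:         corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-demo"}, // illustrative path
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node1"},
						}},
					}},
				},
			},
		},
	}
	// A claim in the same class; setting VolumeName pre-binds it to the PV
	// instead of waiting for the binder to match them.
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pvc-demo"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			VolumeName:       "local-pv-demo",
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}
	for _, obj := range []interface{}{pv, pvc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}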
• [SLOW TEST:11.987 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":127,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:25.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-9641 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:01:25.281: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9641-756/csi-attacher Jun 18 00:01:25.283: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9641 Jun 18 00:01:25.283: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9641 Jun 18 00:01:25.286: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9641 Jun 18 00:01:25.289: INFO: creating *v1.Role: csi-mock-volumes-9641-756/external-attacher-cfg-csi-mock-volumes-9641 Jun 18 00:01:25.292: INFO: creating *v1.RoleBinding: csi-mock-volumes-9641-756/csi-attacher-role-cfg Jun 18 00:01:25.295: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9641-756/csi-provisioner Jun 18 00:01:25.297: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9641 Jun 18 00:01:25.297: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9641 Jun 18 00:01:25.300: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9641 Jun 18 00:01:25.303: INFO: creating *v1.Role: csi-mock-volumes-9641-756/external-provisioner-cfg-csi-mock-volumes-9641 Jun 18 00:01:25.305: INFO: creating *v1.RoleBinding: csi-mock-volumes-9641-756/csi-provisioner-role-cfg Jun 18 00:01:25.308: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9641-756/csi-resizer Jun 18 00:01:25.311: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9641 Jun 18 00:01:25.311: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9641 Jun 18 00:01:25.314: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9641 Jun 18 00:01:25.317: INFO: creating *v1.Role: csi-mock-volumes-9641-756/external-resizer-cfg-csi-mock-volumes-9641 Jun 18 00:01:25.320: INFO: creating *v1.RoleBinding: csi-mock-volumes-9641-756/csi-resizer-role-cfg 
Jun 18 00:01:25.323: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9641-756/csi-snapshotter Jun 18 00:01:25.326: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9641 Jun 18 00:01:25.326: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9641 Jun 18 00:01:25.328: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9641 Jun 18 00:01:25.331: INFO: creating *v1.Role: csi-mock-volumes-9641-756/external-snapshotter-leaderelection-csi-mock-volumes-9641 Jun 18 00:01:25.333: INFO: creating *v1.RoleBinding: csi-mock-volumes-9641-756/external-snapshotter-leaderelection Jun 18 00:01:25.336: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9641-756/csi-mock Jun 18 00:01:25.338: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9641 Jun 18 00:01:25.341: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9641 Jun 18 00:01:25.344: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9641 Jun 18 00:01:25.346: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9641 Jun 18 00:01:25.349: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9641 Jun 18 00:01:25.352: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9641 Jun 18 00:01:25.355: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9641 Jun 18 00:01:25.358: INFO: creating *v1.StatefulSet: csi-mock-volumes-9641-756/csi-mockplugin Jun 18 00:01:25.363: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9641 Jun 18 00:01:25.366: INFO: creating *v1.StatefulSet: csi-mock-volumes-9641-756/csi-mockplugin-attacher Jun 18 00:01:25.371: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9641" Jun 18 00:01:25.373: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9641 to register on node node2 STEP: Creating pod Jun 18 00:01:41.643: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:01:41.648: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-kk4l4] to have phase Bound Jun 18 00:01:41.649: INFO: PersistentVolumeClaim pvc-kk4l4 found but phase is Pending instead of Bound. 
Jun 18 00:01:43.657: INFO: PersistentVolumeClaim pvc-kk4l4 found and phase=Bound (2.009049s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-4phc5 Jun 18 00:02:01.685: INFO: Deleting pod "pvc-volume-tester-4phc5" in namespace "csi-mock-volumes-9641" Jun 18 00:02:01.690: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4phc5" to be fully deleted STEP: Deleting claim pvc-kk4l4 Jun 18 00:02:07.702: INFO: Waiting up to 2m0s for PersistentVolume pvc-d54b0988-8f85-427e-9310-71f9b839fce8 to get deleted Jun 18 00:02:07.704: INFO: PersistentVolume pvc-d54b0988-8f85-427e-9310-71f9b839fce8 found and phase=Bound (2.009647ms) Jun 18 00:02:09.708: INFO: PersistentVolume pvc-d54b0988-8f85-427e-9310-71f9b839fce8 found and phase=Released (2.005545481s) Jun 18 00:02:11.712: INFO: PersistentVolume pvc-d54b0988-8f85-427e-9310-71f9b839fce8 found and phase=Released (4.00971894s) Jun 18 00:02:13.717: INFO: PersistentVolume pvc-d54b0988-8f85-427e-9310-71f9b839fce8 found and phase=Released (6.014198168s) Jun 18 00:02:15.729: INFO: PersistentVolume pvc-d54b0988-8f85-427e-9310-71f9b839fce8 was removed STEP: Deleting storageclass csi-mock-volumes-9641-sc544tz STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9641 STEP: Waiting for namespaces [csi-mock-volumes-9641] to vanish STEP: uninstalling csi mock driver Jun 18 00:02:21.741: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9641-756/csi-attacher Jun 18 00:02:21.745: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9641 Jun 18 00:02:21.748: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9641 Jun 18 00:02:21.752: INFO: deleting *v1.Role: csi-mock-volumes-9641-756/external-attacher-cfg-csi-mock-volumes-9641 Jun 18 00:02:21.755: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9641-756/csi-attacher-role-cfg Jun 18 00:02:21.758: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9641-756/csi-provisioner Jun 18 00:02:21.763: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9641 Jun 18 00:02:21.766: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9641 Jun 18 00:02:21.769: INFO: deleting *v1.Role: csi-mock-volumes-9641-756/external-provisioner-cfg-csi-mock-volumes-9641 Jun 18 00:02:21.772: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9641-756/csi-provisioner-role-cfg Jun 18 00:02:21.776: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9641-756/csi-resizer Jun 18 00:02:21.780: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9641 Jun 18 00:02:21.783: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9641 Jun 18 00:02:21.787: INFO: deleting *v1.Role: csi-mock-volumes-9641-756/external-resizer-cfg-csi-mock-volumes-9641 Jun 18 00:02:21.790: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9641-756/csi-resizer-role-cfg Jun 18 00:02:21.794: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9641-756/csi-snapshotter Jun 18 00:02:21.797: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9641 Jun 18 00:02:21.800: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9641 Jun 18 00:02:21.803: INFO: deleting *v1.Role: csi-mock-volumes-9641-756/external-snapshotter-leaderelection-csi-mock-volumes-9641 Jun 18 00:02:21.806: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9641-756/external-snapshotter-leaderelection Jun 18 00:02:21.809: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-9641-756/csi-mock Jun 18 00:02:21.812: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9641 Jun 18 00:02:21.815: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9641 Jun 18 00:02:21.819: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9641 Jun 18 00:02:21.822: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9641 Jun 18 00:02:21.825: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9641 Jun 18 00:02:21.829: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9641 Jun 18 00:02:21.832: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9641 Jun 18 00:02:21.836: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9641-756/csi-mockplugin Jun 18 00:02:21.840: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9641 Jun 18 00:02:21.843: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9641-756/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-9641-756 STEP: Waiting for namespaces [csi-mock-volumes-9641-756] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:27.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:62.652 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":1,"skipped":12,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:27.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 18 00:02:28.005: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:28.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-3663" for this suite. 
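The CSI attach test above asserts that a VolumeAttachment object exists for the pod while the mock driver reports that attachment is required, and that it goes away once the pod and claim are deleted. As a sketch of how such objects could be inspected out of band with client-go; the kubeconfig path and output format are assumptions, not part of the test.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative: list VolumeAttachment objects, which the attach test above
	// expects the external-attacher to create while the PV is attached.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	vas, err := cs.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, va := range vas.Items {
		fmt.Printf("%s attacher=%s node=%s attached=%v\n",
			va.Name, va.Spec.Attacher, va.Spec.NodeName, va.Status.Attached)
	}
}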
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:21.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:02:25.160: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e-backend && mount --bind /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e-backend /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e-backend && ln -s /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e-backend /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e] Namespace:persistent-local-volumes-test-9312 PodName:hostexec-node1-nw2sh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:25.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:02:25.252: INFO: Creating a PV followed by a PVC Jun 18 00:02:25.259: INFO: Waiting for PV local-pvl99rl to bind to PVC pvc-5hncm Jun 18 00:02:25.259: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5hncm] to have phase Bound Jun 18 00:02:25.261: INFO: PersistentVolumeClaim pvc-5hncm found but phase is Pending instead of Bound. 
Jun 18 00:02:27.265: INFO: PersistentVolumeClaim pvc-5hncm found and phase=Bound (2.005514009s) Jun 18 00:02:27.265: INFO: Waiting up to 3m0s for PersistentVolume local-pvl99rl to have phase Bound Jun 18 00:02:27.268: INFO: PersistentVolume local-pvl99rl found and phase=Bound (2.929199ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:02:33.296: INFO: pod "pod-550bff53-d54c-4fe4-adca-cce79de5ed15" created on Node "node1" STEP: Writing in pod1 Jun 18 00:02:33.296: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9312 PodName:pod-550bff53-d54c-4fe4-adca-cce79de5ed15 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:33.297: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:33.394: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 18 00:02:33.394: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9312 PodName:pod-550bff53-d54c-4fe4-adca-cce79de5ed15 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:33.394: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:33.474: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 18 00:02:33.475: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9312 PodName:pod-550bff53-d54c-4fe4-adca-cce79de5ed15 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:33.475: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:33.557: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-550bff53-d54c-4fe4-adca-cce79de5ed15 in namespace persistent-local-volumes-test-9312 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:02:33.562: INFO: Deleting PersistentVolumeClaim "pvc-5hncm" Jun 18 00:02:33.566: INFO: Deleting PersistentVolume "local-pvl99rl" STEP: Removing the test directory Jun 18 00:02:33.570: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e && umount /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e-backend && rm -r /tmp/local-volume-test-5c7940c9-feaa-4296-89a8-20eae7e9740e-backend] Namespace:persistent-local-volumes-test-9312 PodName:hostexec-node1-nw2sh 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:33.570: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:33.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9312" for this suite. • [SLOW TEST:12.572 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":143,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:33.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 18 00:02:37.744: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9462 PodName:hostexec-node1-qq6fp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:37.744: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:37.835: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 18 00:02:37.835: INFO: exec node1: stdout: "0\n" Jun 18 00:02:37.835: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 18 00:02:37.835: INFO: exec node1: exit code: 0 Jun 18 00:02:37.835: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:37.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "persistent-local-volumes-test-9462" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.155 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:28.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 18 00:02:28.071: INFO: The status of Pod test-hostpath-type-kxhdj is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:02:30.076: INFO: The status of Pod test-hostpath-type-kxhdj is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:02:32.078: INFO: The status of Pod test-hostpath-type-kxhdj is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:38.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-288" for this suite. 
• [SLOW TEST:10.104 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket","total":-1,"completed":2,"skipped":79,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-3190 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:01:19.930: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-attacher Jun 18 00:01:19.932: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3190 Jun 18 00:01:19.932: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3190 Jun 18 00:01:19.935: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3190 Jun 18 00:01:19.938: INFO: creating *v1.Role: csi-mock-volumes-3190-4192/external-attacher-cfg-csi-mock-volumes-3190 Jun 18 00:01:19.941: INFO: creating *v1.RoleBinding: csi-mock-volumes-3190-4192/csi-attacher-role-cfg Jun 18 00:01:19.945: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-provisioner Jun 18 00:01:19.947: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3190 Jun 18 00:01:19.947: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3190 Jun 18 00:01:19.951: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3190 Jun 18 00:01:19.953: INFO: creating *v1.Role: csi-mock-volumes-3190-4192/external-provisioner-cfg-csi-mock-volumes-3190 Jun 18 00:01:19.956: INFO: creating *v1.RoleBinding: csi-mock-volumes-3190-4192/csi-provisioner-role-cfg Jun 18 00:01:19.960: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-resizer Jun 18 00:01:19.962: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3190 Jun 18 00:01:19.962: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3190 Jun 18 00:01:19.965: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3190 Jun 18 00:01:19.967: INFO: creating *v1.Role: csi-mock-volumes-3190-4192/external-resizer-cfg-csi-mock-volumes-3190 Jun 18 00:01:19.970: INFO: creating *v1.RoleBinding: csi-mock-volumes-3190-4192/csi-resizer-role-cfg Jun 18 00:01:19.973: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-snapshotter Jun 18 00:01:19.976: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3190 Jun 18 00:01:19.976: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3190 Jun 18 00:01:19.980: INFO: creating 
*v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3190 Jun 18 00:01:19.982: INFO: creating *v1.Role: csi-mock-volumes-3190-4192/external-snapshotter-leaderelection-csi-mock-volumes-3190 Jun 18 00:01:19.986: INFO: creating *v1.RoleBinding: csi-mock-volumes-3190-4192/external-snapshotter-leaderelection Jun 18 00:01:19.988: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-mock Jun 18 00:01:19.991: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3190 Jun 18 00:01:19.993: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3190 Jun 18 00:01:19.996: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3190 Jun 18 00:01:19.999: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3190 Jun 18 00:01:20.001: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3190 Jun 18 00:01:20.004: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3190 Jun 18 00:01:20.007: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3190 Jun 18 00:01:20.012: INFO: creating *v1.StatefulSet: csi-mock-volumes-3190-4192/csi-mockplugin Jun 18 00:01:20.016: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3190 Jun 18 00:01:20.019: INFO: creating *v1.StatefulSet: csi-mock-volumes-3190-4192/csi-mockplugin-attacher Jun 18 00:01:20.023: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3190" Jun 18 00:01:20.025: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3190 to register on node node1 STEP: Creating pod Jun 18 00:01:46.424: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:01:46.428: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-khcw9] to have phase Bound Jun 18 00:01:46.430: INFO: PersistentVolumeClaim pvc-khcw9 found but phase is Pending instead of Bound. 
Jun 18 00:01:48.435: INFO: PersistentVolumeClaim pvc-khcw9 found and phase=Bound (2.007142072s) STEP: checking for CSIInlineVolumes feature Jun 18 00:01:58.474: INFO: Pod inline-volume-lx8zv has the following logs: Jun 18 00:01:58.479: INFO: Deleting pod "inline-volume-lx8zv" in namespace "csi-mock-volumes-3190" Jun 18 00:01:58.483: INFO: Wait up to 5m0s for pod "inline-volume-lx8zv" to be fully deleted STEP: Deleting the previously created pod Jun 18 00:02:02.490: INFO: Deleting pod "pvc-volume-tester-zhf7j" in namespace "csi-mock-volumes-3190" Jun 18 00:02:02.495: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zhf7j" to be fully deleted STEP: Checking CSI driver logs Jun 18 00:02:20.525: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-zhf7j Jun 18 00:02:20.525: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-3190 Jun 18 00:02:20.525: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 07a45bd1-369c-43c5-b197-1fec66e459bf Jun 18 00:02:20.525: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Jun 18 00:02:20.525: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false Jun 18 00:02:20.525: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/07a45bd1-369c-43c5-b197-1fec66e459bf/volumes/kubernetes.io~csi/pvc-6db0b7fa-a199-45c3-914c-195faa611dd5/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-zhf7j Jun 18 00:02:20.525: INFO: Deleting pod "pvc-volume-tester-zhf7j" in namespace "csi-mock-volumes-3190" STEP: Deleting claim pvc-khcw9 Jun 18 00:02:20.533: INFO: Waiting up to 2m0s for PersistentVolume pvc-6db0b7fa-a199-45c3-914c-195faa611dd5 to get deleted Jun 18 00:02:20.535: INFO: PersistentVolume pvc-6db0b7fa-a199-45c3-914c-195faa611dd5 found and phase=Bound (1.820652ms) Jun 18 00:02:22.538: INFO: PersistentVolume pvc-6db0b7fa-a199-45c3-914c-195faa611dd5 was removed STEP: Deleting storageclass csi-mock-volumes-3190-scd9knd STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3190 STEP: Waiting for namespaces [csi-mock-volumes-3190] to vanish STEP: uninstalling csi mock driver Jun 18 00:02:28.551: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-attacher Jun 18 00:02:28.555: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3190 Jun 18 00:02:28.558: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3190 Jun 18 00:02:28.562: INFO: deleting *v1.Role: csi-mock-volumes-3190-4192/external-attacher-cfg-csi-mock-volumes-3190 Jun 18 00:02:28.565: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3190-4192/csi-attacher-role-cfg Jun 18 00:02:28.569: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-provisioner Jun 18 00:02:28.572: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3190 Jun 18 00:02:28.576: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3190 Jun 18 00:02:28.579: INFO: deleting *v1.Role: csi-mock-volumes-3190-4192/external-provisioner-cfg-csi-mock-volumes-3190 Jun 18 00:02:28.582: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3190-4192/csi-provisioner-role-cfg Jun 18 00:02:28.586: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-resizer Jun 18 00:02:28.590: INFO: deleting 
*v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3190 Jun 18 00:02:28.593: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3190 Jun 18 00:02:28.597: INFO: deleting *v1.Role: csi-mock-volumes-3190-4192/external-resizer-cfg-csi-mock-volumes-3190 Jun 18 00:02:28.600: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3190-4192/csi-resizer-role-cfg Jun 18 00:02:28.603: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-snapshotter Jun 18 00:02:28.607: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3190 Jun 18 00:02:28.610: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3190 Jun 18 00:02:28.613: INFO: deleting *v1.Role: csi-mock-volumes-3190-4192/external-snapshotter-leaderelection-csi-mock-volumes-3190 Jun 18 00:02:28.616: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3190-4192/external-snapshotter-leaderelection Jun 18 00:02:28.619: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3190-4192/csi-mock Jun 18 00:02:28.623: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3190 Jun 18 00:02:28.626: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3190 Jun 18 00:02:28.629: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3190 Jun 18 00:02:28.632: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3190 Jun 18 00:02:28.635: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3190 Jun 18 00:02:28.638: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3190 Jun 18 00:02:28.641: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3190 Jun 18 00:02:28.645: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3190-4192/csi-mockplugin Jun 18 00:02:28.648: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3190 Jun 18 00:02:28.651: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3190-4192/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3190-4192 STEP: Waiting for namespaces [csi-mock-volumes-3190-4192] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:40.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:81.483 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":1,"skipped":59,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:23.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a 
default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 STEP: Building a driver namespace object, basename csi-mock-volumes-8421 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:01:23.408: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-attacher Jun 18 00:01:23.410: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8421 Jun 18 00:01:23.410: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8421 Jun 18 00:01:23.414: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8421 Jun 18 00:01:23.417: INFO: creating *v1.Role: csi-mock-volumes-8421-209/external-attacher-cfg-csi-mock-volumes-8421 Jun 18 00:01:23.419: INFO: creating *v1.RoleBinding: csi-mock-volumes-8421-209/csi-attacher-role-cfg Jun 18 00:01:23.422: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-provisioner Jun 18 00:01:23.427: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8421 Jun 18 00:01:23.427: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8421 Jun 18 00:01:23.430: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8421 Jun 18 00:01:23.433: INFO: creating *v1.Role: csi-mock-volumes-8421-209/external-provisioner-cfg-csi-mock-volumes-8421 Jun 18 00:01:23.437: INFO: creating *v1.RoleBinding: csi-mock-volumes-8421-209/csi-provisioner-role-cfg Jun 18 00:01:23.440: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-resizer Jun 18 00:01:23.442: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8421 Jun 18 00:01:23.442: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8421 Jun 18 00:01:23.445: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8421 Jun 18 00:01:23.449: INFO: creating *v1.Role: csi-mock-volumes-8421-209/external-resizer-cfg-csi-mock-volumes-8421 Jun 18 00:01:23.452: INFO: creating *v1.RoleBinding: csi-mock-volumes-8421-209/csi-resizer-role-cfg Jun 18 00:01:23.454: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-snapshotter Jun 18 00:01:23.457: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8421 Jun 18 00:01:23.457: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8421 Jun 18 00:01:23.461: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8421 Jun 18 00:01:23.464: INFO: creating *v1.Role: csi-mock-volumes-8421-209/external-snapshotter-leaderelection-csi-mock-volumes-8421 Jun 18 00:01:23.468: INFO: creating *v1.RoleBinding: csi-mock-volumes-8421-209/external-snapshotter-leaderelection Jun 18 00:01:23.471: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-mock Jun 18 00:01:23.474: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8421 Jun 18 00:01:23.477: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8421 Jun 18 00:01:23.479: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8421 Jun 18 00:01:23.482: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8421 Jun 18 00:01:23.484: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-resizer-role-csi-mock-volumes-8421 Jun 18 00:01:23.487: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8421 Jun 18 00:01:23.489: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8421 Jun 18 00:01:23.492: INFO: creating *v1.StatefulSet: csi-mock-volumes-8421-209/csi-mockplugin Jun 18 00:01:23.497: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8421 Jun 18 00:01:23.499: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8421" Jun 18 00:01:23.501: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8421 to register on node node2 STEP: Creating pod with fsGroup Jun 18 00:01:44.790: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:01:44.796: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ndjt2] to have phase Bound Jun 18 00:01:44.798: INFO: PersistentVolumeClaim pvc-ndjt2 found but phase is Pending instead of Bound. Jun 18 00:01:46.802: INFO: PersistentVolumeClaim pvc-ndjt2 found and phase=Bound (2.006094348s) Jun 18 00:01:54.888: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-8421] Namespace:csi-mock-volumes-8421 PodName:pvc-volume-tester-w7m8q ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:54.889: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:54.970: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-8421/csi-mock-volumes-8421'; sync] Namespace:csi-mock-volumes-8421 PodName:pvc-volume-tester-w7m8q ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:54.970: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:56.970: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-8421/csi-mock-volumes-8421] Namespace:csi-mock-volumes-8421 PodName:pvc-volume-tester-w7m8q ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:56.970: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:01:57.205: INFO: pod csi-mock-volumes-8421/pvc-volume-tester-w7m8q exec for cmd ls -l /mnt/test/csi-mock-volumes-8421/csi-mock-volumes-8421, stdout: -rw-r--r-- 1 root 9046 13 Jun 18 00:01 /mnt/test/csi-mock-volumes-8421/csi-mock-volumes-8421, stderr: Jun 18 00:01:57.205: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-8421] Namespace:csi-mock-volumes-8421 PodName:pvc-volume-tester-w7m8q ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:01:57.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-w7m8q Jun 18 00:01:57.289: INFO: Deleting pod "pvc-volume-tester-w7m8q" in namespace "csi-mock-volumes-8421" Jun 18 00:01:57.293: INFO: Wait up to 5m0s for pod "pvc-volume-tester-w7m8q" to be fully deleted STEP: Deleting claim pvc-ndjt2 Jun 18 00:02:31.310: INFO: Waiting up to 2m0s for PersistentVolume pvc-3025c7d9-8c8c-4489-92a7-c0ab4fb3a324 to get deleted Jun 18 00:02:31.313: INFO: PersistentVolume pvc-3025c7d9-8c8c-4489-92a7-c0ab4fb3a324 found and phase=Bound (2.266169ms) Jun 18 00:02:33.319: INFO: PersistentVolume pvc-3025c7d9-8c8c-4489-92a7-c0ab4fb3a324 was removed STEP: Deleting storageclass csi-mock-volumes-8421-scnvqpv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8421 STEP: Waiting for 
namespaces [csi-mock-volumes-8421] to vanish STEP: uninstalling csi mock driver Jun 18 00:02:39.332: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-attacher Jun 18 00:02:39.337: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8421 Jun 18 00:02:39.340: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8421 Jun 18 00:02:39.344: INFO: deleting *v1.Role: csi-mock-volumes-8421-209/external-attacher-cfg-csi-mock-volumes-8421 Jun 18 00:02:39.347: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8421-209/csi-attacher-role-cfg Jun 18 00:02:39.350: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-provisioner Jun 18 00:02:39.354: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8421 Jun 18 00:02:39.358: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8421 Jun 18 00:02:39.361: INFO: deleting *v1.Role: csi-mock-volumes-8421-209/external-provisioner-cfg-csi-mock-volumes-8421 Jun 18 00:02:39.365: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8421-209/csi-provisioner-role-cfg Jun 18 00:02:39.368: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-resizer Jun 18 00:02:39.371: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8421 Jun 18 00:02:39.376: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8421 Jun 18 00:02:39.379: INFO: deleting *v1.Role: csi-mock-volumes-8421-209/external-resizer-cfg-csi-mock-volumes-8421 Jun 18 00:02:39.383: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8421-209/csi-resizer-role-cfg Jun 18 00:02:39.387: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-snapshotter Jun 18 00:02:39.390: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8421 Jun 18 00:02:39.394: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8421 Jun 18 00:02:39.397: INFO: deleting *v1.Role: csi-mock-volumes-8421-209/external-snapshotter-leaderelection-csi-mock-volumes-8421 Jun 18 00:02:39.400: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8421-209/external-snapshotter-leaderelection Jun 18 00:02:39.404: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8421-209/csi-mock Jun 18 00:02:39.408: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8421 Jun 18 00:02:39.418: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8421 Jun 18 00:02:39.428: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8421 Jun 18 00:02:39.433: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8421 Jun 18 00:02:39.436: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8421 Jun 18 00:02:39.439: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8421 Jun 18 00:02:39.443: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8421 Jun 18 00:02:39.446: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8421-209/csi-mockplugin Jun 18 00:02:39.451: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8421 STEP: deleting the driver namespace: csi-mock-volumes-8421-209 STEP: Waiting for namespaces [csi-mock-volumes-8421-209] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:45.466: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready • [SLOW TEST:82.144 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1558 should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":2,"skipped":72,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:45.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Jun 18 00:02:45.508: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:45.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8161" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Jun 18 00:02:45.517: INFO: AfterEach: Cleaning up test resources Jun 18 00:02:45.517: INFO: pvc is nil Jun 18 00:02:45.517: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:38.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 18 00:02:38.203: INFO: The status of Pod test-hostpath-type-cbk6b is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:02:40.207: 
INFO: The status of Pod test-hostpath-type-cbk6b is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:02:42.207: INFO: The status of Pod test-hostpath-type-cbk6b is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Jun 18 00:02:42.209: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-2138 PodName:test-hostpath-type-cbk6b ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:42.209: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:46.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-2138" for this suite. • [SLOW TEST:8.180 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev","total":-1,"completed":3,"skipped":92,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:46.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 18 00:02:46.437: INFO: Waiting up to 5m0s for pod "pod-2fc566e8-519e-4feb-a145-7e0c98926e8e" in namespace "emptydir-9004" to be "Succeeded or Failed" Jun 18 00:02:46.439: INFO: Pod "pod-2fc566e8-519e-4feb-a145-7e0c98926e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04811ms Jun 18 00:02:48.444: INFO: Pod "pod-2fc566e8-519e-4feb-a145-7e0c98926e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006998988s Jun 18 00:02:50.449: INFO: Pod "pod-2fc566e8-519e-4feb-a145-7e0c98926e8e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012151475s STEP: Saw pod success Jun 18 00:02:50.449: INFO: Pod "pod-2fc566e8-519e-4feb-a145-7e0c98926e8e" satisfied condition "Succeeded or Failed" Jun 18 00:02:50.452: INFO: Trying to get logs from node node1 pod pod-2fc566e8-519e-4feb-a145-7e0c98926e8e container test-container: STEP: delete the pod Jun 18 00:02:50.475: INFO: Waiting for pod pod-2fc566e8-519e-4feb-a145-7e0c98926e8e to disappear Jun 18 00:02:50.477: INFO: Pod pod-2fc566e8-519e-4feb-a145-7e0c98926e8e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:50.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9004" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":4,"skipped":115,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:50.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Jun 18 00:02:50.532: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Jun 18 00:02:50.538: INFO: Waiting up to 30s for PersistentVolume hostpath-2pjmp to have phase Available Jun 18 00:02:50.540: INFO: PersistentVolume hostpath-2pjmp found but phase is Pending instead of Available. Jun 18 00:02:51.543: INFO: PersistentVolume hostpath-2pjmp found and phase=Available (1.005056485s) STEP: Checking that PV Protection finalizer is set [It] Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 STEP: Creating a PVC STEP: Waiting for PVC to become Bound Jun 18 00:02:51.550: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-r9lwt] to have phase Bound Jun 18 00:02:51.552: INFO: PersistentVolumeClaim pvc-r9lwt found but phase is Pending instead of Bound. Jun 18 00:02:53.557: INFO: PersistentVolumeClaim pvc-r9lwt found and phase=Bound (2.007198155s) STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC STEP: Checking that the PV status is Terminating STEP: Deleting the PVC that is bound to the PV STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC Jun 18 00:02:53.569: INFO: Waiting up to 3m0s for PersistentVolume hostpath-2pjmp to get deleted Jun 18 00:02:53.571: INFO: PersistentVolume hostpath-2pjmp found and phase=Bound (2.136518ms) Jun 18 00:02:55.579: INFO: PersistentVolume hostpath-2pjmp was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:55.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-3715" for this suite. 
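
Note on the PV Protection spec above: deleting a PV that is still bound to a PVC only marks it Terminating because of the kubernetes.io/pv-protection finalizer, and the object is removed once the claim releases it. A minimal sketch for observing those finalizers on a live cluster follows; the PV, PVC, and namespace names are the ones from this run and exist only while the spec is executing.

  # sketch: inspect the protection finalizers exercised by the spec above
  kubectl get pv hostpath-2pjmp -o jsonpath='{.metadata.finalizers}'
  # output should include kubernetes.io/pv-protection
  kubectl get pvc pvc-r9lwt -n pv-protection-3715 -o jsonpath='{.metadata.finalizers}'
  # output should include kubernetes.io/pvc-protection
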
[AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Jun 18 00:02:55.588: INFO: AfterEach: Cleaning up test resources. Jun 18 00:02:55.588: INFO: Deleting PersistentVolumeClaim "pvc-r9lwt" Jun 18 00:02:55.590: INFO: Deleting PersistentVolume "hostpath-2pjmp" • [SLOW TEST:5.081 seconds] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":5,"skipped":125,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:45.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f22097da-5d70-4a31-b253-ae4b13f4830f" Jun 18 00:02:49.711: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f22097da-5d70-4a31-b253-ae4b13f4830f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f22097da-5d70-4a31-b253-ae4b13f4830f" "/tmp/local-volume-test-f22097da-5d70-4a31-b253-ae4b13f4830f"] Namespace:persistent-local-volumes-test-7187 PodName:hostexec-node1-sg4fg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:49.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:02:49.810: INFO: Creating a PV followed by a PVC Jun 18 00:02:49.818: INFO: Waiting for PV local-pvjdpfc to bind to PVC pvc-rc8cd Jun 18 00:02:49.818: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-rc8cd] to have phase Bound Jun 18 00:02:49.819: INFO: PersistentVolumeClaim pvc-rc8cd found but phase is Pending instead of Bound. 
Jun 18 00:02:51.824: INFO: PersistentVolumeClaim pvc-rc8cd found and phase=Bound (2.006443128s) Jun 18 00:02:51.824: INFO: Waiting up to 3m0s for PersistentVolume local-pvjdpfc to have phase Bound Jun 18 00:02:51.826: INFO: PersistentVolume local-pvjdpfc found and phase=Bound (2.095589ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:02:55.889: INFO: pod "pod-290d63cd-f930-4c00-896f-163cad9c761b" created on Node "node1" STEP: Writing in pod1 Jun 18 00:02:55.889: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7187 PodName:pod-290d63cd-f930-4c00-896f-163cad9c761b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:55.889: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:55.981: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 18 00:02:55.981: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7187 PodName:pod-290d63cd-f930-4c00-896f-163cad9c761b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:55.981: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:56.186: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 18 00:02:56.186: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f22097da-5d70-4a31-b253-ae4b13f4830f > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7187 PodName:pod-290d63cd-f930-4c00-896f-163cad9c761b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:56.186: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:56.319: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-f22097da-5d70-4a31-b253-ae4b13f4830f > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-290d63cd-f930-4c00-896f-163cad9c761b in namespace persistent-local-volumes-test-7187 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:02:56.325: INFO: Deleting PersistentVolumeClaim "pvc-rc8cd" Jun 18 00:02:56.328: INFO: Deleting PersistentVolume "local-pvjdpfc" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f22097da-5d70-4a31-b253-ae4b13f4830f" Jun 18 00:02:56.332: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f22097da-5d70-4a31-b253-ae4b13f4830f"] Namespace:persistent-local-volumes-test-7187 PodName:hostexec-node1-sg4fg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 18 00:02:56.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:02:57.169: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f22097da-5d70-4a31-b253-ae4b13f4830f] Namespace:persistent-local-volumes-test-7187 PodName:hostexec-node1-sg4fg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:57.169: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:02:57.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7187" for this suite. • [SLOW TEST:11.764 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":148,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:27.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 STEP: Building a driver namespace object, basename csi-mock-volumes-6463 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:02:27.494: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-attacher Jun 18 00:02:27.497: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6463 Jun 18 00:02:27.497: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6463 Jun 18 00:02:27.500: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6463 Jun 18 00:02:27.503: INFO: creating *v1.Role: csi-mock-volumes-6463-7267/external-attacher-cfg-csi-mock-volumes-6463 Jun 18 00:02:27.509: INFO: creating *v1.RoleBinding: csi-mock-volumes-6463-7267/csi-attacher-role-cfg Jun 18 00:02:27.514: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-provisioner Jun 18 00:02:27.520: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6463 Jun 18 00:02:27.520: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6463 Jun 18 00:02:27.526: INFO: creating *v1.ClusterRoleBinding: 
csi-provisioner-role-csi-mock-volumes-6463 Jun 18 00:02:27.531: INFO: creating *v1.Role: csi-mock-volumes-6463-7267/external-provisioner-cfg-csi-mock-volumes-6463 Jun 18 00:02:27.537: INFO: creating *v1.RoleBinding: csi-mock-volumes-6463-7267/csi-provisioner-role-cfg Jun 18 00:02:27.539: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-resizer Jun 18 00:02:27.542: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6463 Jun 18 00:02:27.542: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6463 Jun 18 00:02:27.545: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6463 Jun 18 00:02:27.548: INFO: creating *v1.Role: csi-mock-volumes-6463-7267/external-resizer-cfg-csi-mock-volumes-6463 Jun 18 00:02:27.552: INFO: creating *v1.RoleBinding: csi-mock-volumes-6463-7267/csi-resizer-role-cfg Jun 18 00:02:27.554: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-snapshotter Jun 18 00:02:27.557: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6463 Jun 18 00:02:27.557: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6463 Jun 18 00:02:27.560: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6463 Jun 18 00:02:27.563: INFO: creating *v1.Role: csi-mock-volumes-6463-7267/external-snapshotter-leaderelection-csi-mock-volumes-6463 Jun 18 00:02:27.565: INFO: creating *v1.RoleBinding: csi-mock-volumes-6463-7267/external-snapshotter-leaderelection Jun 18 00:02:27.568: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-mock Jun 18 00:02:27.570: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6463 Jun 18 00:02:27.573: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6463 Jun 18 00:02:27.575: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6463 Jun 18 00:02:27.578: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6463 Jun 18 00:02:27.581: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6463 Jun 18 00:02:27.583: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6463 Jun 18 00:02:27.586: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6463 Jun 18 00:02:27.589: INFO: creating *v1.StatefulSet: csi-mock-volumes-6463-7267/csi-mockplugin Jun 18 00:02:27.593: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6463 Jun 18 00:02:27.596: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6463" Jun 18 00:02:27.598: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6463 to register on node node2 STEP: Creating pod with fsGroup Jun 18 00:02:37.612: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:02:37.617: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9ksqg] to have phase Bound Jun 18 00:02:37.619: INFO: PersistentVolumeClaim pvc-9ksqg found but phase is Pending instead of Bound. 
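
Aside on the fsGroupPolicy=None spec in progress here: the policy the kubelet honours is carried by the CSIDriver object, so once the mock driver is deployed the effective value can be read back directly. A minimal sketch, assuming the driver object created by this spec is still installed and carries the policy its name suggests:

  # sketch: read the fsGroupPolicy carried by the mock CSIDriver object
  kubectl get csidriver csi-mock-csi-mock-volumes-6463 -o jsonpath='{.spec.fsGroupPolicy}'
  # for this spec the driver should report None, i.e. volume ownership is left untouched
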
Jun 18 00:02:39.622: INFO: PersistentVolumeClaim pvc-9ksqg found and phase=Bound (2.00577257s) Jun 18 00:02:49.643: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-6463] Namespace:csi-mock-volumes-6463 PodName:pvc-volume-tester-sqxpc ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:49.643: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:49.735: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-6463/csi-mock-volumes-6463'; sync] Namespace:csi-mock-volumes-6463 PodName:pvc-volume-tester-sqxpc ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:49.735: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:51.804: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-6463/csi-mock-volumes-6463] Namespace:csi-mock-volumes-6463 PodName:pvc-volume-tester-sqxpc ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:51.804: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:51.899: INFO: pod csi-mock-volumes-6463/pvc-volume-tester-sqxpc exec for cmd ls -l /mnt/test/csi-mock-volumes-6463/csi-mock-volumes-6463, stdout: -rw-r--r-- 1 root root 13 Jun 18 00:02 /mnt/test/csi-mock-volumes-6463/csi-mock-volumes-6463, stderr: Jun 18 00:02:51.899: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-6463] Namespace:csi-mock-volumes-6463 PodName:pvc-volume-tester-sqxpc ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:02:51.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-sqxpc Jun 18 00:02:51.977: INFO: Deleting pod "pvc-volume-tester-sqxpc" in namespace "csi-mock-volumes-6463" Jun 18 00:02:51.981: INFO: Wait up to 5m0s for pod "pvc-volume-tester-sqxpc" to be fully deleted STEP: Deleting claim pvc-9ksqg Jun 18 00:03:26.033: INFO: Waiting up to 2m0s for PersistentVolume pvc-ba37aef5-a5e1-48f1-a01b-29c028845ea8 to get deleted Jun 18 00:03:26.036: INFO: PersistentVolume pvc-ba37aef5-a5e1-48f1-a01b-29c028845ea8 found and phase=Bound (2.251895ms) Jun 18 00:03:28.040: INFO: PersistentVolume pvc-ba37aef5-a5e1-48f1-a01b-29c028845ea8 was removed STEP: Deleting storageclass csi-mock-volumes-6463-scg4hwc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6463 STEP: Waiting for namespaces [csi-mock-volumes-6463] to vanish STEP: uninstalling csi mock driver Jun 18 00:03:34.053: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-attacher Jun 18 00:03:34.059: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6463 Jun 18 00:03:34.063: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6463 Jun 18 00:03:34.066: INFO: deleting *v1.Role: csi-mock-volumes-6463-7267/external-attacher-cfg-csi-mock-volumes-6463 Jun 18 00:03:34.069: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6463-7267/csi-attacher-role-cfg Jun 18 00:03:34.074: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-provisioner Jun 18 00:03:34.077: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6463 Jun 18 00:03:34.080: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6463 Jun 18 00:03:34.086: INFO: deleting *v1.Role: 
csi-mock-volumes-6463-7267/external-provisioner-cfg-csi-mock-volumes-6463 Jun 18 00:03:34.093: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6463-7267/csi-provisioner-role-cfg Jun 18 00:03:34.101: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-resizer Jun 18 00:03:34.108: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6463 Jun 18 00:03:34.111: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6463 Jun 18 00:03:34.114: INFO: deleting *v1.Role: csi-mock-volumes-6463-7267/external-resizer-cfg-csi-mock-volumes-6463 Jun 18 00:03:34.118: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6463-7267/csi-resizer-role-cfg Jun 18 00:03:34.121: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-snapshotter Jun 18 00:03:34.124: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6463 Jun 18 00:03:34.128: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6463 Jun 18 00:03:34.131: INFO: deleting *v1.Role: csi-mock-volumes-6463-7267/external-snapshotter-leaderelection-csi-mock-volumes-6463 Jun 18 00:03:34.135: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6463-7267/external-snapshotter-leaderelection Jun 18 00:03:34.138: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6463-7267/csi-mock Jun 18 00:03:34.142: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6463 Jun 18 00:03:34.145: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6463 Jun 18 00:03:34.149: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6463 Jun 18 00:03:34.153: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6463 Jun 18 00:03:34.157: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6463 Jun 18 00:03:34.161: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6463 Jun 18 00:03:34.165: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6463 Jun 18 00:03:34.169: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6463-7267/csi-mockplugin Jun 18 00:03:34.173: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6463 STEP: deleting the driver namespace: csi-mock-volumes-6463-7267 STEP: Waiting for namespaces [csi-mock-volumes-6463-7267] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:03:40.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:72.771 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1558 should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":6,"skipped":132,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: 
Creating a kubernetes client Jun 18 00:02:13.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] two pods: should call NodeStage after previous NodeUnstage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 STEP: Building a driver namespace object, basename csi-mock-volumes-4261 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 18 00:02:14.033: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-attacher Jun 18 00:02:14.035: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4261 Jun 18 00:02:14.035: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4261 Jun 18 00:02:14.038: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4261 Jun 18 00:02:14.040: INFO: creating *v1.Role: csi-mock-volumes-4261-9975/external-attacher-cfg-csi-mock-volumes-4261 Jun 18 00:02:14.043: INFO: creating *v1.RoleBinding: csi-mock-volumes-4261-9975/csi-attacher-role-cfg Jun 18 00:02:14.045: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-provisioner Jun 18 00:02:14.048: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4261 Jun 18 00:02:14.048: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4261 Jun 18 00:02:14.051: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4261 Jun 18 00:02:14.053: INFO: creating *v1.Role: csi-mock-volumes-4261-9975/external-provisioner-cfg-csi-mock-volumes-4261 Jun 18 00:02:14.056: INFO: creating *v1.RoleBinding: csi-mock-volumes-4261-9975/csi-provisioner-role-cfg Jun 18 00:02:14.059: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-resizer Jun 18 00:02:14.062: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4261 Jun 18 00:02:14.062: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4261 Jun 18 00:02:14.065: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4261 Jun 18 00:02:14.068: INFO: creating *v1.Role: csi-mock-volumes-4261-9975/external-resizer-cfg-csi-mock-volumes-4261 Jun 18 00:02:14.071: INFO: creating *v1.RoleBinding: csi-mock-volumes-4261-9975/csi-resizer-role-cfg Jun 18 00:02:14.074: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-snapshotter Jun 18 00:02:14.077: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4261 Jun 18 00:02:14.077: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4261 Jun 18 00:02:14.080: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4261 Jun 18 00:02:14.082: INFO: creating *v1.Role: csi-mock-volumes-4261-9975/external-snapshotter-leaderelection-csi-mock-volumes-4261 Jun 18 00:02:14.085: INFO: creating *v1.RoleBinding: csi-mock-volumes-4261-9975/external-snapshotter-leaderelection Jun 18 00:02:14.088: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-mock Jun 18 00:02:14.091: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4261 Jun 18 00:02:14.094: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4261 Jun 18 00:02:14.097: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4261 Jun 18 
00:02:14.101: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4261 Jun 18 00:02:14.103: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4261 Jun 18 00:02:14.106: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4261 Jun 18 00:02:14.109: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4261 Jun 18 00:02:14.111: INFO: creating *v1.StatefulSet: csi-mock-volumes-4261-9975/csi-mockplugin Jun 18 00:02:14.116: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4261 Jun 18 00:02:14.119: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4261" Jun 18 00:02:14.121: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4261 to register on node node2 I0618 00:02:20.190864 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0618 00:02:20.229487 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4261","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:02:20.231395 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0618 00:02:20.233492 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0618 00:02:20.291202 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4261","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:02:21.296752 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4261"},"Error":"","FullError":null} STEP: Creating pod Jun 18 00:02:23.637: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:02:23.643: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jp6tj] to have phase Bound Jun 18 00:02:23.646: INFO: PersistentVolumeClaim pvc-jp6tj found but phase is Pending instead of Bound. 
I0618 00:02:23.654595 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-9802597a-12d4-4f38-b29e-3abec514667e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-9802597a-12d4-4f38-b29e-3abec514667e"}}},"Error":"","FullError":null} Jun 18 00:02:25.650: INFO: PersistentVolumeClaim pvc-jp6tj found and phase=Bound (2.006499418s) Jun 18 00:02:25.666: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jp6tj] to have phase Bound Jun 18 00:02:25.668: INFO: PersistentVolumeClaim pvc-jp6tj found and phase=Bound (2.399791ms) I0618 00:02:25.815343 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:02:25.818: INFO: >>> kubeConfig: /root/.kube/config I0618 00:02:25.918746 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9802597a-12d4-4f38-b29e-3abec514667e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9802597a-12d4-4f38-b29e-3abec514667e","storage.kubernetes.io/csiProvisionerIdentity":"1655510540234-8081-csi-mock-csi-mock-volumes-4261"}},"Response":{},"Error":"","FullError":null} I0618 00:02:25.923286 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:02:25.925: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:26.004: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:26.093: INFO: >>> kubeConfig: /root/.kube/config I0618 00:02:26.190022 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9802597a-12d4-4f38-b29e-3abec514667e/globalmount","target_path":"/var/lib/kubelet/pods/7c7bedbd-591e-4c7e-bf0f-a4f498aa2fb0/volumes/kubernetes.io~csi/pvc-9802597a-12d4-4f38-b29e-3abec514667e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9802597a-12d4-4f38-b29e-3abec514667e","storage.kubernetes.io/csiProvisionerIdentity":"1655510540234-8081-csi-mock-csi-mock-volumes-4261"}},"Response":{},"Error":"","FullError":null} Jun 18 00:02:29.674: INFO: Deleting pod "pvc-volume-tester-hh9gf" in namespace "csi-mock-volumes-4261" Jun 18 00:02:29.681: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hh9gf" to be fully deleted Jun 18 00:02:32.721: INFO: >>> kubeConfig: /root/.kube/config I0618 00:02:32.805296 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7c7bedbd-591e-4c7e-bf0f-a4f498aa2fb0/volumes/kubernetes.io~csi/pvc-9802597a-12d4-4f38-b29e-3abec514667e/mount"},"Response":{},"Error":"","FullError":null} I0618 00:02:32.824253 32 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:02:32.826100 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9802597a-12d4-4f38-b29e-3abec514667e/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I0618 00:02:33.428710 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:02:33.430298 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9802597a-12d4-4f38-b29e-3abec514667e/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I0618 00:02:34.437899 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:02:34.439565 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9802597a-12d4-4f38-b29e-3abec514667e/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I0618 00:02:36.457683 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:02:36.459699 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9802597a-12d4-4f38-b29e-3abec514667e/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I0618 00:02:40.466013 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:02:40.467824 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9802597a-12d4-4f38-b29e-3abec514667e/globalmount"},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake final error","FullError":{"code":3,"message":"fake final error"}} I0618 00:02:40.943196 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:02:40.944: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:02:41.092: INFO: >>> 
kubeConfig: /root/.kube/config Jun 18 00:02:41.309: INFO: >>> kubeConfig: /root/.kube/config I0618 00:02:41.397671 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9802597a-12d4-4f38-b29e-3abec514667e/globalmount","target_path":"/var/lib/kubelet/pods/f7f9ee32-569f-426c-b5ca-0645d1c0f08d/volumes/kubernetes.io~csi/pvc-9802597a-12d4-4f38-b29e-3abec514667e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9802597a-12d4-4f38-b29e-3abec514667e","storage.kubernetes.io/csiProvisionerIdentity":"1655510540234-8081-csi-mock-csi-mock-volumes-4261"}},"Response":{},"Error":"","FullError":null} Jun 18 00:02:49.702: INFO: Deleting pod "pvc-volume-tester-zkmq2" in namespace "csi-mock-volumes-4261" Jun 18 00:02:49.707: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zkmq2" to be fully deleted Jun 18 00:02:51.493: INFO: >>> kubeConfig: /root/.kube/config I0618 00:02:51.670996 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/f7f9ee32-569f-426c-b5ca-0645d1c0f08d/volumes/kubernetes.io~csi/pvc-9802597a-12d4-4f38-b29e-3abec514667e/mount"},"Response":{},"Error":"","FullError":null} I0618 00:02:51.696735 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:02:51.699015 32 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9802597a-12d4-4f38-b29e-3abec514667e/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls Jun 18 00:02:54.714: FAIL: while waiting for all CSI calls Unexpected error: <*errors.errorString | 0xc003d119c0>: { s: "Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0)", } Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0) occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.13.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017 +0x79e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001783380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001783380) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001783380, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 STEP: Deleting pod pvc-volume-tester-hh9gf Jun 18 00:02:54.715: INFO: Deleting pod "pvc-volume-tester-hh9gf" in namespace "csi-mock-volumes-4261" STEP: Deleting pod pvc-volume-tester-zkmq2 Jun 18 00:02:54.717: INFO: Deleting pod "pvc-volume-tester-zkmq2" in namespace "csi-mock-volumes-4261" STEP: Deleting claim pvc-jp6tj Jun 18 00:02:54.726: INFO: Waiting up to 2m0s for PersistentVolume pvc-9802597a-12d4-4f38-b29e-3abec514667e to get deleted Jun 18 00:02:54.728: INFO: PersistentVolume pvc-9802597a-12d4-4f38-b29e-3abec514667e found and phase=Bound (1.986087ms) I0618 00:02:54.763441 32 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 18 00:02:56.731: INFO: PersistentVolume pvc-9802597a-12d4-4f38-b29e-3abec514667e was removed STEP: Deleting storageclass csi-mock-volumes-4261-sc8nd4c STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4261 STEP: Waiting for namespaces [csi-mock-volumes-4261] to vanish STEP: uninstalling csi mock driver Jun 18 00:03:02.754: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-attacher Jun 18 00:03:02.759: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4261 Jun 18 00:03:02.762: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4261 Jun 18 00:03:02.765: INFO: deleting *v1.Role: csi-mock-volumes-4261-9975/external-attacher-cfg-csi-mock-volumes-4261 Jun 18 00:03:02.769: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4261-9975/csi-attacher-role-cfg Jun 18 00:03:02.773: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-provisioner Jun 18 00:03:02.776: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4261 Jun 18 00:03:02.779: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4261 Jun 18 00:03:02.782: INFO: deleting *v1.Role: csi-mock-volumes-4261-9975/external-provisioner-cfg-csi-mock-volumes-4261 Jun 18 00:03:02.786: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4261-9975/csi-provisioner-role-cfg Jun 18 00:03:02.789: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-resizer Jun 18 00:03:02.793: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4261 Jun 18 00:03:02.796: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4261 Jun 18 00:03:02.800: INFO: deleting *v1.Role: csi-mock-volumes-4261-9975/external-resizer-cfg-csi-mock-volumes-4261 Jun 18 00:03:02.803: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4261-9975/csi-resizer-role-cfg Jun 18 00:03:02.809: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-snapshotter Jun 18 00:03:02.812: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4261 Jun 18 00:03:02.815: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4261 Jun 18 00:03:02.819: INFO: deleting *v1.Role: csi-mock-volumes-4261-9975/external-snapshotter-leaderelection-csi-mock-volumes-4261 Jun 18 00:03:02.822: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4261-9975/external-snapshotter-leaderelection Jun 18 00:03:02.827: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4261-9975/csi-mock Jun 18 00:03:02.831: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4261 Jun 18 00:03:02.835: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4261 Jun 18 00:03:02.839: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4261 Jun 18 00:03:02.842: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4261 Jun 18 00:03:02.845: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4261 Jun 18 00:03:02.850: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4261 Jun 18 00:03:02.853: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4261 Jun 18 00:03:02.857: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4261-9975/csi-mockplugin Jun 18 00:03:02.862: 
INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4261 STEP: deleting the driver namespace: csi-mock-volumes-4261-9975 STEP: Waiting for namespaces [csi-mock-volumes-4261-9975] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:03:46.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [92.936 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeUnstage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 two pods: should call NodeStage after previous NodeUnstage final error [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 Jun 18 00:02:54.714: while waiting for all CSI calls Unexpected error: <*errors.errorString | 0xc003d119c0>: { s: "Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0)", } Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0) occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error","total":-1,"completed":3,"skipped":140,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:37.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529 STEP: Building a driver namespace object, basename csi-mock-volumes-1075 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:02:37.942: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-attacher Jun 18 00:02:37.946: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1075 Jun 18 00:02:37.946: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1075 Jun 18 00:02:37.949: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1075 Jun 18 00:02:37.952: INFO: creating *v1.Role: csi-mock-volumes-1075-5404/external-attacher-cfg-csi-mock-volumes-1075 Jun 18 00:02:37.955: INFO: creating *v1.RoleBinding: csi-mock-volumes-1075-5404/csi-attacher-role-cfg Jun 18 00:02:37.958: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-provisioner Jun 18 00:02:37.960: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1075 Jun 18 00:02:37.960: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1075 Jun 18 00:02:37.964: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1075 Jun 18 
00:02:37.967: INFO: creating *v1.Role: csi-mock-volumes-1075-5404/external-provisioner-cfg-csi-mock-volumes-1075 Jun 18 00:02:37.970: INFO: creating *v1.RoleBinding: csi-mock-volumes-1075-5404/csi-provisioner-role-cfg Jun 18 00:02:37.973: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-resizer Jun 18 00:02:37.976: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1075 Jun 18 00:02:37.976: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1075 Jun 18 00:02:37.978: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1075 Jun 18 00:02:37.982: INFO: creating *v1.Role: csi-mock-volumes-1075-5404/external-resizer-cfg-csi-mock-volumes-1075 Jun 18 00:02:37.984: INFO: creating *v1.RoleBinding: csi-mock-volumes-1075-5404/csi-resizer-role-cfg Jun 18 00:02:37.987: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-snapshotter Jun 18 00:02:37.990: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1075 Jun 18 00:02:37.990: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1075 Jun 18 00:02:37.993: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1075 Jun 18 00:02:37.996: INFO: creating *v1.Role: csi-mock-volumes-1075-5404/external-snapshotter-leaderelection-csi-mock-volumes-1075 Jun 18 00:02:37.999: INFO: creating *v1.RoleBinding: csi-mock-volumes-1075-5404/external-snapshotter-leaderelection Jun 18 00:02:38.001: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-mock Jun 18 00:02:38.004: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1075 Jun 18 00:02:38.007: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1075 Jun 18 00:02:38.010: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1075 Jun 18 00:02:38.014: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1075 Jun 18 00:02:38.019: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1075 Jun 18 00:02:38.024: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1075 Jun 18 00:02:38.029: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1075 Jun 18 00:02:38.034: INFO: creating *v1.StatefulSet: csi-mock-volumes-1075-5404/csi-mockplugin Jun 18 00:02:38.040: INFO: creating *v1.StatefulSet: csi-mock-volumes-1075-5404/csi-mockplugin-attacher Jun 18 00:02:38.044: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1075 to register on node node2 STEP: Creating pod Jun 18 00:02:47.561: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:02:47.565: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-474cw] to have phase Bound Jun 18 00:02:47.567: INFO: PersistentVolumeClaim pvc-474cw found but phase is Pending instead of Bound. Jun 18 00:02:49.572: INFO: PersistentVolumeClaim pvc-474cw found and phase=Bound (2.007148697s) STEP: Creating pod Jun 18 00:02:57.597: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:02:57.600: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-sjjw5] to have phase Bound Jun 18 00:02:57.603: INFO: PersistentVolumeClaim pvc-sjjw5 found but phase is Pending instead of Bound. 
Jun 18 00:02:59.605: INFO: PersistentVolumeClaim pvc-sjjw5 found and phase=Bound (2.005697518s) STEP: Creating pod Jun 18 00:03:07.630: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:03:07.634: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-h782q] to have phase Bound Jun 18 00:03:07.636: INFO: PersistentVolumeClaim pvc-h782q found but phase is Pending instead of Bound. Jun 18 00:03:09.640: INFO: PersistentVolumeClaim pvc-h782q found and phase=Bound (2.006583262s) STEP: Deleting pod pvc-volume-tester-sx92d Jun 18 00:03:19.666: INFO: Deleting pod "pvc-volume-tester-sx92d" in namespace "csi-mock-volumes-1075" Jun 18 00:03:19.673: INFO: Wait up to 5m0s for pod "pvc-volume-tester-sx92d" to be fully deleted STEP: Deleting pod pvc-volume-tester-m8xgv Jun 18 00:03:29.679: INFO: Deleting pod "pvc-volume-tester-m8xgv" in namespace "csi-mock-volumes-1075" Jun 18 00:03:29.685: INFO: Wait up to 5m0s for pod "pvc-volume-tester-m8xgv" to be fully deleted STEP: Deleting pod pvc-volume-tester-j8hpm Jun 18 00:03:39.690: INFO: Deleting pod "pvc-volume-tester-j8hpm" in namespace "csi-mock-volumes-1075" Jun 18 00:03:39.697: INFO: Wait up to 5m0s for pod "pvc-volume-tester-j8hpm" to be fully deleted STEP: Deleting claim pvc-474cw Jun 18 00:03:43.711: INFO: Waiting up to 2m0s for PersistentVolume pvc-fd3052f6-722c-42a0-b83a-ca0f3c04c059 to get deleted Jun 18 00:03:43.713: INFO: PersistentVolume pvc-fd3052f6-722c-42a0-b83a-ca0f3c04c059 found and phase=Bound (1.997581ms) Jun 18 00:03:45.717: INFO: PersistentVolume pvc-fd3052f6-722c-42a0-b83a-ca0f3c04c059 was removed STEP: Deleting claim pvc-sjjw5 Jun 18 00:03:45.725: INFO: Waiting up to 2m0s for PersistentVolume pvc-9c8b917e-081d-402c-beb7-edfc401b9d54 to get deleted Jun 18 00:03:45.728: INFO: PersistentVolume pvc-9c8b917e-081d-402c-beb7-edfc401b9d54 found and phase=Bound (2.245517ms) Jun 18 00:03:47.732: INFO: PersistentVolume pvc-9c8b917e-081d-402c-beb7-edfc401b9d54 was removed STEP: Deleting claim pvc-h782q Jun 18 00:03:47.740: INFO: Waiting up to 2m0s for PersistentVolume pvc-be46670f-e097-41e8-a619-c03d119ebd37 to get deleted Jun 18 00:03:47.742: INFO: PersistentVolume pvc-be46670f-e097-41e8-a619-c03d119ebd37 found and phase=Bound (2.094633ms) Jun 18 00:03:49.747: INFO: PersistentVolume pvc-be46670f-e097-41e8-a619-c03d119ebd37 was removed STEP: Deleting storageclass csi-mock-volumes-1075-scpwtl7 STEP: Deleting storageclass csi-mock-volumes-1075-scrs4tw STEP: Deleting storageclass csi-mock-volumes-1075-scd7hqf STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1075 STEP: Waiting for namespaces [csi-mock-volumes-1075] to vanish STEP: uninstalling csi mock driver Jun 18 00:03:55.773: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-attacher Jun 18 00:03:55.779: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1075 Jun 18 00:03:55.783: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1075 Jun 18 00:03:55.786: INFO: deleting *v1.Role: csi-mock-volumes-1075-5404/external-attacher-cfg-csi-mock-volumes-1075 Jun 18 00:03:55.790: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1075-5404/csi-attacher-role-cfg Jun 18 00:03:55.793: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-provisioner Jun 18 00:03:55.796: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1075 Jun 18 00:03:55.800: INFO: deleting *v1.ClusterRoleBinding: 
csi-provisioner-role-csi-mock-volumes-1075 Jun 18 00:03:55.803: INFO: deleting *v1.Role: csi-mock-volumes-1075-5404/external-provisioner-cfg-csi-mock-volumes-1075 Jun 18 00:03:55.811: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1075-5404/csi-provisioner-role-cfg Jun 18 00:03:55.819: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-resizer Jun 18 00:03:55.827: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1075 Jun 18 00:03:55.832: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1075 Jun 18 00:03:55.836: INFO: deleting *v1.Role: csi-mock-volumes-1075-5404/external-resizer-cfg-csi-mock-volumes-1075 Jun 18 00:03:55.840: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1075-5404/csi-resizer-role-cfg Jun 18 00:03:55.843: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-snapshotter Jun 18 00:03:55.846: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1075 Jun 18 00:03:55.850: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1075 Jun 18 00:03:55.853: INFO: deleting *v1.Role: csi-mock-volumes-1075-5404/external-snapshotter-leaderelection-csi-mock-volumes-1075 Jun 18 00:03:55.856: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1075-5404/external-snapshotter-leaderelection Jun 18 00:03:55.860: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1075-5404/csi-mock Jun 18 00:03:55.863: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1075 Jun 18 00:03:55.866: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1075 Jun 18 00:03:55.870: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1075 Jun 18 00:03:55.873: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1075 Jun 18 00:03:55.876: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1075 Jun 18 00:03:55.880: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1075 Jun 18 00:03:55.883: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1075 Jun 18 00:03:55.886: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1075-5404/csi-mockplugin Jun 18 00:03:55.890: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1075-5404/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1075-5404 STEP: Waiting for namespaces [csi-mock-volumes-1075-5404] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:01.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:84.038 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI volume limit information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:528 should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]","total":-1,"completed":7,"skipped":152,"failed":0} 
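The attach-limit spec that just passed depends on the per-node volume limit the mock driver advertises, which the scheduler reads from the CSINode object. An illustrative check while the driver is still installed (driver and node names taken from the log; the object is gone once the namespaces above are deleted):

# Per-node attach limit advertised by the mock driver
kubectl get csinode node2 \
  -o jsonpath='{.spec.drivers[?(@.name=="csi-mock-csi-mock-volumes-1075")].allocatable.count}'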
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:03:40.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad" Jun 18 00:03:42.254: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad && dd if=/dev/zero of=/tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad/file] Namespace:persistent-local-volumes-test-6302 PodName:hostexec-node1-62d8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:03:42.254: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:03:42.396: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6302 PodName:hostexec-node1-62d8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:03:42.396: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:03:42.497: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad && chmod o+rwx /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad] Namespace:persistent-local-volumes-test-6302 PodName:hostexec-node1-62d8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:03:42.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:03:42.787: INFO: Creating a PV followed by a PVC Jun 18 00:03:42.794: INFO: Waiting for PV local-pvbmbmh to bind to PVC pvc-4x8dj Jun 18 00:03:42.794: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4x8dj] to have phase Bound Jun 18 00:03:42.797: INFO: PersistentVolumeClaim pvc-4x8dj found but phase is Pending instead of Bound. Jun 18 00:03:44.804: INFO: PersistentVolumeClaim pvc-4x8dj found but phase is Pending instead of Bound. Jun 18 00:03:46.808: INFO: PersistentVolumeClaim pvc-4x8dj found but phase is Pending instead of Bound. Jun 18 00:03:48.811: INFO: PersistentVolumeClaim pvc-4x8dj found but phase is Pending instead of Bound. Jun 18 00:03:50.816: INFO: PersistentVolumeClaim pvc-4x8dj found but phase is Pending instead of Bound. Jun 18 00:03:52.819: INFO: PersistentVolumeClaim pvc-4x8dj found but phase is Pending instead of Bound. 
Jun 18 00:03:54.824: INFO: PersistentVolumeClaim pvc-4x8dj found but phase is Pending instead of Bound. Jun 18 00:03:56.830: INFO: PersistentVolumeClaim pvc-4x8dj found and phase=Bound (14.036078106s) Jun 18 00:03:56.830: INFO: Waiting up to 3m0s for PersistentVolume local-pvbmbmh to have phase Bound Jun 18 00:03:56.832: INFO: PersistentVolume local-pvbmbmh found and phase=Bound (1.952464ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 18 00:04:00.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6302 exec pod-606e60d0-76c1-44db-86de-2e16692a2b01 --namespace=persistent-local-volumes-test-6302 -- stat -c %g /mnt/volume1' Jun 18 00:04:01.119: INFO: stderr: "" Jun 18 00:04:01.119: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 18 00:04:05.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6302 exec pod-760e90b2-f409-4ab8-913a-8e856fe7fa01 --namespace=persistent-local-volumes-test-6302 -- stat -c %g /mnt/volume1' Jun 18 00:04:05.392: INFO: stderr: "" Jun 18 00:04:05.392: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-606e60d0-76c1-44db-86de-2e16692a2b01 in namespace persistent-local-volumes-test-6302 STEP: Deleting second pod STEP: Deleting pod pod-760e90b2-f409-4ab8-913a-8e856fe7fa01 in namespace persistent-local-volumes-test-6302 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:04:05.401: INFO: Deleting PersistentVolumeClaim "pvc-4x8dj" Jun 18 00:04:05.405: INFO: Deleting PersistentVolume "local-pvbmbmh" Jun 18 00:04:05.408: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad] Namespace:persistent-local-volumes-test-6302 PodName:hostexec-node1-62d8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:05.408: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:04:05.537: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6302 PodName:hostexec-node1-62d8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:05.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad/file Jun 18 00:04:05.656: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6302 PodName:hostexec-node1-62d8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 18 00:04:05.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad Jun 18 00:04:05.750: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad] Namespace:persistent-local-volumes-test-6302 PodName:hostexec-node1-62d8b ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:05.750: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:05.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6302" for this suite. • [SLOW TEST:25.663 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":7,"skipped":134,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:16.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-8221 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:02:16.414: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-attacher Jun 18 00:02:16.418: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8221 Jun 18 00:02:16.418: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8221 Jun 18 00:02:16.421: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8221 Jun 18 00:02:16.424: INFO: creating *v1.Role: csi-mock-volumes-8221-6401/external-attacher-cfg-csi-mock-volumes-8221 Jun 18 00:02:16.426: INFO: creating *v1.RoleBinding: csi-mock-volumes-8221-6401/csi-attacher-role-cfg Jun 18 00:02:16.429: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-provisioner Jun 18 00:02:16.432: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8221 Jun 18 
00:02:16.432: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8221 Jun 18 00:02:16.434: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8221 Jun 18 00:02:16.437: INFO: creating *v1.Role: csi-mock-volumes-8221-6401/external-provisioner-cfg-csi-mock-volumes-8221 Jun 18 00:02:16.440: INFO: creating *v1.RoleBinding: csi-mock-volumes-8221-6401/csi-provisioner-role-cfg Jun 18 00:02:16.443: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-resizer Jun 18 00:02:16.446: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8221 Jun 18 00:02:16.446: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8221 Jun 18 00:02:16.449: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8221 Jun 18 00:02:16.453: INFO: creating *v1.Role: csi-mock-volumes-8221-6401/external-resizer-cfg-csi-mock-volumes-8221 Jun 18 00:02:16.456: INFO: creating *v1.RoleBinding: csi-mock-volumes-8221-6401/csi-resizer-role-cfg Jun 18 00:02:16.458: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-snapshotter Jun 18 00:02:16.461: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8221 Jun 18 00:02:16.461: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8221 Jun 18 00:02:16.464: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8221 Jun 18 00:02:16.466: INFO: creating *v1.Role: csi-mock-volumes-8221-6401/external-snapshotter-leaderelection-csi-mock-volumes-8221 Jun 18 00:02:16.469: INFO: creating *v1.RoleBinding: csi-mock-volumes-8221-6401/external-snapshotter-leaderelection Jun 18 00:02:16.472: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-mock Jun 18 00:02:16.474: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8221 Jun 18 00:02:16.477: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8221 Jun 18 00:02:16.481: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8221 Jun 18 00:02:16.484: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8221 Jun 18 00:02:16.487: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8221 Jun 18 00:02:16.489: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8221 Jun 18 00:02:16.492: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8221 Jun 18 00:02:16.495: INFO: creating *v1.StatefulSet: csi-mock-volumes-8221-6401/csi-mockplugin Jun 18 00:02:16.499: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8221 Jun 18 00:02:16.502: INFO: creating *v1.StatefulSet: csi-mock-volumes-8221-6401/csi-mockplugin-resizer Jun 18 00:02:16.505: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8221" Jun 18 00:02:16.507: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8221 to register on node node1 STEP: Creating pod Jun 18 00:02:26.024: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:02:26.028: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-k6f5f] to have phase Bound Jun 18 00:02:26.031: INFO: PersistentVolumeClaim pvc-k6f5f found but phase is Pending instead of Bound. 
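The spec being set up here exercises online expansion: as the entries that follow show, the claim is grown in place and the resize completes without the pod being recreated. A hand-run equivalent of the same resize, assuming a StorageClass with allowVolumeExpansion enabled; the claim name and namespace come from the log, while the 2Gi target size is illustrative (the log does not record the sizes used):

# Request a larger size on the bound claim
kubectl -n csi-mock-volumes-8221 patch pvc pvc-k6f5f \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'

# Online expansion: status.capacity catches up and the resize conditions clear,
# all while the pod keeps running
kubectl -n csi-mock-volumes-8221 get pvc pvc-k6f5f \
  -o jsonpath='{.status.capacity.storage}{"\n"}'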
Jun 18 00:02:28.033: INFO: PersistentVolumeClaim pvc-k6f5f found and phase=Bound (2.005473593s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-gvglq Jun 18 00:03:50.073: INFO: Deleting pod "pvc-volume-tester-gvglq" in namespace "csi-mock-volumes-8221" Jun 18 00:03:50.077: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gvglq" to be fully deleted STEP: Deleting claim pvc-k6f5f Jun 18 00:04:00.095: INFO: Waiting up to 2m0s for PersistentVolume pvc-13cee8c8-d691-4ff0-ab52-eb718cee3507 to get deleted Jun 18 00:04:00.097: INFO: PersistentVolume pvc-13cee8c8-d691-4ff0-ab52-eb718cee3507 found and phase=Bound (2.687491ms) Jun 18 00:04:02.101: INFO: PersistentVolume pvc-13cee8c8-d691-4ff0-ab52-eb718cee3507 was removed STEP: Deleting storageclass csi-mock-volumes-8221-sckh22q STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8221 STEP: Waiting for namespaces [csi-mock-volumes-8221] to vanish STEP: uninstalling csi mock driver Jun 18 00:04:08.113: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-attacher Jun 18 00:04:08.119: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8221 Jun 18 00:04:08.123: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8221 Jun 18 00:04:08.127: INFO: deleting *v1.Role: csi-mock-volumes-8221-6401/external-attacher-cfg-csi-mock-volumes-8221 Jun 18 00:04:08.130: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8221-6401/csi-attacher-role-cfg Jun 18 00:04:08.133: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-provisioner Jun 18 00:04:08.137: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8221 Jun 18 00:04:08.140: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8221 Jun 18 00:04:08.144: INFO: deleting *v1.Role: csi-mock-volumes-8221-6401/external-provisioner-cfg-csi-mock-volumes-8221 Jun 18 00:04:08.148: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8221-6401/csi-provisioner-role-cfg Jun 18 00:04:08.152: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-resizer Jun 18 00:04:08.155: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8221 Jun 18 00:04:08.158: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8221 Jun 18 00:04:08.162: INFO: deleting *v1.Role: csi-mock-volumes-8221-6401/external-resizer-cfg-csi-mock-volumes-8221 Jun 18 00:04:08.165: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8221-6401/csi-resizer-role-cfg Jun 18 00:04:08.169: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-snapshotter Jun 18 00:04:08.172: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8221 Jun 18 00:04:08.176: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8221 Jun 18 00:04:08.180: INFO: deleting *v1.Role: csi-mock-volumes-8221-6401/external-snapshotter-leaderelection-csi-mock-volumes-8221 Jun 18 00:04:08.183: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8221-6401/external-snapshotter-leaderelection Jun 18 00:04:08.189: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8221-6401/csi-mock Jun 18 00:04:08.194: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8221 Jun 18 00:04:08.197: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8221 Jun 18 00:04:08.200: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8221 Jun 18 00:04:08.204: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8221 Jun 18 00:04:08.208: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8221 Jun 18 00:04:08.212: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8221 Jun 18 00:04:08.216: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8221 Jun 18 00:04:08.219: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8221-6401/csi-mockplugin Jun 18 00:04:08.222: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8221 Jun 18 00:04:08.226: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8221-6401/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-8221-6401 STEP: Waiting for namespaces [csi-mock-volumes-8221-6401] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:20.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:123.904 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":4,"skipped":85,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:05.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-b3156291-98a7-4130-9a4a-b9df0e597dec" Jun 18 00:04:10.042: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b3156291-98a7-4130-9a4a-b9df0e597dec && dd if=/dev/zero of=/tmp/local-volume-test-b3156291-98a7-4130-9a4a-b9df0e597dec/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-b3156291-98a7-4130-9a4a-b9df0e597dec/file] Namespace:persistent-local-volumes-test-3679 PodName:hostexec-node1-pkcgl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:10.042: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:04:10.181: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 
E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b3156291-98a7-4130-9a4a-b9df0e597dec/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3679 PodName:hostexec-node1-pkcgl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:10.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:04:10.276: INFO: Creating a PV followed by a PVC Jun 18 00:04:10.284: INFO: Waiting for PV local-pvlbbsv to bind to PVC pvc-8cjmn Jun 18 00:04:10.284: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8cjmn] to have phase Bound Jun 18 00:04:10.287: INFO: PersistentVolumeClaim pvc-8cjmn found but phase is Pending instead of Bound. Jun 18 00:04:12.290: INFO: PersistentVolumeClaim pvc-8cjmn found and phase=Bound (2.005754071s) Jun 18 00:04:12.290: INFO: Waiting up to 3m0s for PersistentVolume local-pvlbbsv to have phase Bound Jun 18 00:04:12.293: INFO: PersistentVolume local-pvlbbsv found and phase=Bound (3.12814ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:04:18.319: INFO: pod "pod-b18a4406-b6f3-4c74-ac9c-6b6fd89678b0" created on Node "node1" STEP: Writing in pod1 Jun 18 00:04:18.319: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3679 PodName:pod-b18a4406-b6f3-4c74-ac9c-6b6fd89678b0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:04:18.319: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:04:18.407: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000124 seconds, 141.8KB/s", err: Jun 18 00:04:18.407: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3679 PodName:pod-b18a4406-b6f3-4c74-ac9c-6b6fd89678b0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:04:18.407: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:04:18.497: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-b18a4406-b6f3-4c74-ac9c-6b6fd89678b0 in namespace persistent-local-volumes-test-3679 STEP: Creating pod2 STEP: Creating a pod Jun 18 00:04:24.524: INFO: pod "pod-4620fa77-87d7-4190-b439-aa9a484577c1" created on Node "node1" STEP: Reading in pod2 Jun 18 00:04:24.524: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-3679 PodName:pod-4620fa77-87d7-4190-b439-aa9a484577c1 ContainerName:write-pod Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:04:24.524: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:04:25.424: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-4620fa77-87d7-4190-b439-aa9a484577c1 in namespace persistent-local-volumes-test-3679 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:04:25.430: INFO: Deleting PersistentVolumeClaim "pvc-8cjmn" Jun 18 00:04:25.434: INFO: Deleting PersistentVolume "local-pvlbbsv" Jun 18 00:04:25.438: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b3156291-98a7-4130-9a4a-b9df0e597dec/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3679 PodName:hostexec-node1-pkcgl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:25.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node1" at path /tmp/local-volume-test-b3156291-98a7-4130-9a4a-b9df0e597dec/file Jun 18 00:04:25.522: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-3679 PodName:hostexec-node1-pkcgl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:25.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-b3156291-98a7-4130-9a4a-b9df0e597dec Jun 18 00:04:25.608: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b3156291-98a7-4130-9a4a-b9df0e597dec] Namespace:persistent-local-volumes-test-3679 PodName:hostexec-node1-pkcgl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:25.609: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:25.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3679" for this suite. 
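Both local-volume specs above ([Volume type: blockfswithformat] and [Volume type: block]) back the PV with a loopback device that the hostexec pod builds and tears down on node1. Condensed from the ExecWithOptions commands in the log, the lifecycle is roughly the following (the directory path is the one from the blockfswithformat run; device names vary):

# Create a backing file and attach it to the first free loop device
DIR=/tmp/local-volume-test-cfb2c0d9-6aac-400f-ae58-e636cec03aad
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
losetup -f "$DIR/file"

# Find the device that was just attached
LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')

# blockfswithformat only: put a filesystem on it and mount it at $DIR
mkfs -t ext4 "$LOOP_DEV"
mount -t ext4 "$LOOP_DEV" "$DIR" && chmod o+rwx "$DIR"

# Teardown, as in the AfterEach blocks above
umount "$DIR"            # only if it was mounted
losetup -d "$LOOP_DEV"
rm -r "$DIR"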
• [SLOW TEST:19.727 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":193,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:25.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 18 00:04:25.766: INFO: The status of Pod test-hostpath-type-6vrnd is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:04:27.769: INFO: The status of Pod test-hostpath-type-6vrnd is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:04:29.769: INFO: The status of Pod test-hostpath-type-6vrnd is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:35.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-147" for this suite. 
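The HostPathType spec that just finished first lets the kubelet create 'adir' via HostPathDirectoryOrCreate and then expects the mount to be rejected when the same path is declared with type File. A minimal pod manifest that reproduces the failing mount; the pod name, host path, and image are illustrative, only the node name and the type mismatch mirror the test:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo          # illustrative name
spec:
  nodeName: node2                   # the node the spec ran on
  containers:
  - name: test
    image: busybox                  # arbitrary image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host
      mountPath: /mnt/adir
  volumes:
  - name: host
    hostPath:
      path: /tmp/adir               # an existing directory on the host
      type: File                    # mismatched type: pod stays ContainerCreating
                                    # with a FailedMount "hostPath type check failed" event
EOF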
• [SLOW TEST:10.101 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile","total":-1,"completed":9,"skipped":196,"failed":0} [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:35.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] for read-only PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Jun 18 00:04:35.867: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:35.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-3220" for this suite. S [SKIPPING] [0.047 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "default (30s)" [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:136 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:20.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-dcc3d808-a841-43d9-aef5-c5908b0a7349" Jun 18 00:04:24.321: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-dcc3d808-a841-43d9-aef5-c5908b0a7349" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-dcc3d808-a841-43d9-aef5-c5908b0a7349" "/tmp/local-volume-test-dcc3d808-a841-43d9-aef5-c5908b0a7349"] Namespace:persistent-local-volumes-test-5400 PodName:hostexec-node1-cx2ch ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:24.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:04:25.433: INFO: Creating a PV followed by a PVC Jun 18 00:04:25.439: INFO: Waiting for PV local-pvtjth5 to bind to PVC pvc-khx28 Jun 18 00:04:25.439: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-khx28] to have phase Bound Jun 18 00:04:25.444: INFO: PersistentVolumeClaim pvc-khx28 found but phase is Pending instead of Bound. Jun 18 00:04:27.448: INFO: PersistentVolumeClaim pvc-khx28 found and phase=Bound (2.008067405s) Jun 18 00:04:27.448: INFO: Waiting up to 3m0s for PersistentVolume local-pvtjth5 to have phase Bound Jun 18 00:04:27.451: INFO: PersistentVolume local-pvtjth5 found and phase=Bound (2.937461ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 18 00:04:35.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5400 exec pod-566316da-85d7-421e-bcaf-01acb5b02ffd --namespace=persistent-local-volumes-test-5400 -- stat -c %g /mnt/volume1' Jun 18 00:04:35.731: INFO: stderr: "" Jun 18 00:04:35.731: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-566316da-85d7-421e-bcaf-01acb5b02ffd in namespace persistent-local-volumes-test-5400 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:04:35.737: INFO: Deleting PersistentVolumeClaim "pvc-khx28" Jun 18 00:04:35.741: INFO: Deleting PersistentVolume "local-pvtjth5" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-dcc3d808-a841-43d9-aef5-c5908b0a7349" Jun 18 00:04:35.745: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-dcc3d808-a841-43d9-aef5-c5908b0a7349"] Namespace:persistent-local-volumes-test-5400 PodName:hostexec-node1-cx2ch ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:35.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:04:35.844: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dcc3d808-a841-43d9-aef5-c5908b0a7349] Namespace:persistent-local-volumes-test-5400 PodName:hostexec-node1-cx2ch ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:04:35.844: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:35.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5400" for this 
suite. • [SLOW TEST:15.675 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":5,"skipped":92,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:35.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:112 [It] should be reschedulable [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Jun 18 00:04:35.955: INFO: Only supported for providers [openstack gce gke vsphere azure] (not local) [AfterEach] pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:322 [AfterEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:35.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3340" for this suite. 
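The tmpfs local-volume spec above verifies that fsGroup 1234 is applied to the mounted volume (the stat -c %g output in the log). The same behaviour can be reproduced with any pod whose securityContext sets fsGroup; a sketch in which the pod name, image, and claim name are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo                # illustrative
spec:
  securityContext:
    fsGroup: 1234                   # group applied to the volume, as in the test
  containers:
  - name: app
    image: busybox                  # arbitrary image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume1
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: my-claim           # placeholder PVC
EOF

# Group ownership of the mount point should come back as 1234
kubectl exec fsgroup-demo -- stat -c %g /mnt/volume1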
S [SKIPPING] [0.036 seconds] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Default StorageClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:319 pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:320 should be reschedulable [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Only supported for providers [openstack gce gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:328 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:35.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:04:35.974: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:35.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8541" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:36.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:04:36.028: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:36.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3110" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:36.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 
STEP: Creating a pod to test downward API volume plugin Jun 18 00:04:36.044: INFO: Waiting up to 5m0s for pod "metadata-volume-146b2b3e-e1fe-46a6-bdd9-d5b9d117176a" in namespace "downward-api-3991" to be "Succeeded or Failed" Jun 18 00:04:36.046: INFO: Pod "metadata-volume-146b2b3e-e1fe-46a6-bdd9-d5b9d117176a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.677556ms Jun 18 00:04:38.051: INFO: Pod "metadata-volume-146b2b3e-e1fe-46a6-bdd9-d5b9d117176a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006895093s Jun 18 00:04:40.058: INFO: Pod "metadata-volume-146b2b3e-e1fe-46a6-bdd9-d5b9d117176a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014223879s STEP: Saw pod success Jun 18 00:04:40.058: INFO: Pod "metadata-volume-146b2b3e-e1fe-46a6-bdd9-d5b9d117176a" satisfied condition "Succeeded or Failed" Jun 18 00:04:40.060: INFO: Trying to get logs from node node2 pod metadata-volume-146b2b3e-e1fe-46a6-bdd9-d5b9d117176a container client-container: STEP: delete the pod Jun 18 00:04:40.084: INFO: Waiting for pod metadata-volume-146b2b3e-e1fe-46a6-bdd9-d5b9d117176a to disappear Jun 18 00:04:40.086: INFO: Pod metadata-volume-146b2b3e-e1fe-46a6-bdd9-d5b9d117176a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:40.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3991" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:40.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:04:40.224: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:40.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-114" for this suite. 
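For the Downward API volume spec that passed above, the pod projects its own metadata into a file with a restricted defaultMode while running as a non-root user with an fsGroup. A minimal equivalent of that volume definition; the pod name, image, user/group IDs, and file path are illustrative, only the container name and the defaultMode/fsGroup pattern follow the spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo               # illustrative
spec:
  restartPolicy: Never              # run to completion, as the test pod does
  securityContext:
    runAsUser: 1000                 # non-root, as the spec name implies
    fsGroup: 2000
  containers:
  - name: client-container
    image: busybox                  # arbitrary image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0440             # the defaultMode under test
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF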
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:147 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:36.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 18 00:04:36.160: INFO: The status of Pod test-hostpath-type-lzvxm is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:04:38.165: INFO: The status of Pod test-hostpath-type-lzvxm is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:04:40.164: INFO: The status of Pod test-hostpath-type-lzvxm is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:42.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-1064" for this suite. 
• [SLOW TEST:6.075 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile","total":-1,"completed":10,"skipped":279,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:40.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 18 00:04:40.302: INFO: The status of Pod test-hostpath-type-vbqvm is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:04:42.305: INFO: The status of Pod test-hostpath-type-vbqvm is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:04:44.306: INFO: The status of Pod test-hostpath-type-vbqvm is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:04:54.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-8785" for this suite. 
• [SLOW TEST:14.100 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory","total":-1,"completed":7,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:03:46.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-103 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:03:46.987: INFO: creating *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-attacher Jun 18 00:03:46.990: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-103 Jun 18 00:03:46.990: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-103 Jun 18 00:03:46.992: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-103 Jun 18 00:03:46.996: INFO: creating *v1.Role: csi-mock-volumes-103-4372/external-attacher-cfg-csi-mock-volumes-103 Jun 18 00:03:46.999: INFO: creating *v1.RoleBinding: csi-mock-volumes-103-4372/csi-attacher-role-cfg Jun 18 00:03:47.003: INFO: creating *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-provisioner Jun 18 00:03:47.006: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-103 Jun 18 00:03:47.006: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-103 Jun 18 00:03:47.008: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-103 Jun 18 00:03:47.010: INFO: creating *v1.Role: csi-mock-volumes-103-4372/external-provisioner-cfg-csi-mock-volumes-103 Jun 18 00:03:47.013: INFO: creating *v1.RoleBinding: csi-mock-volumes-103-4372/csi-provisioner-role-cfg Jun 18 00:03:47.015: INFO: creating *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-resizer Jun 18 00:03:47.018: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-103 Jun 18 00:03:47.018: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-103 Jun 18 00:03:47.021: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-103 Jun 18 00:03:47.023: INFO: creating *v1.Role: csi-mock-volumes-103-4372/external-resizer-cfg-csi-mock-volumes-103 Jun 18 00:03:47.026: INFO: creating *v1.RoleBinding: csi-mock-volumes-103-4372/csi-resizer-role-cfg Jun 18 00:03:47.028: INFO: creating *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-snapshotter Jun 18 00:03:47.030: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-103 Jun 18 00:03:47.030: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-103 Jun 18 00:03:47.033: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-103 Jun 18 00:03:47.035: INFO: creating *v1.Role: csi-mock-volumes-103-4372/external-snapshotter-leaderelection-csi-mock-volumes-103 Jun 18 00:03:47.038: INFO: creating *v1.RoleBinding: csi-mock-volumes-103-4372/external-snapshotter-leaderelection Jun 18 00:03:47.040: INFO: creating *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-mock Jun 18 00:03:47.043: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-103 Jun 18 00:03:47.045: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-103 Jun 18 00:03:47.048: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-103 Jun 18 00:03:47.051: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-103 Jun 18 00:03:47.053: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-103 Jun 18 00:03:47.055: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-103 Jun 18 00:03:47.058: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-103 Jun 18 00:03:47.060: INFO: creating *v1.StatefulSet: csi-mock-volumes-103-4372/csi-mockplugin Jun 18 00:03:47.064: INFO: creating *v1.StatefulSet: csi-mock-volumes-103-4372/csi-mockplugin-attacher Jun 18 00:03:47.067: INFO: creating *v1.StatefulSet: csi-mock-volumes-103-4372/csi-mockplugin-resizer Jun 18 00:03:47.071: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-103 to register on node node1 STEP: Creating pod Jun 18 00:03:56.596: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:03:56.601: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wt6pk] to have phase Bound Jun 18 00:03:56.603: INFO: PersistentVolumeClaim pvc-wt6pk found but phase is Pending instead of Bound. 
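------------------------------
The claim above is polled until it reports phase Bound. A sketch of the same wait done directly with client-go, assuming the kubeconfig path shown in the log; the poll interval is an arbitrary choice, and the namespace/claim names are copied from this run only:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the claim until Status.Phase is Bound, mirroring
// the "Waiting up to timeout=5m0s for PersistentVolumeClaims ... to have
// phase Bound" lines above. The 2s interval is an arbitrary choice.
func waitForPVCBound(cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolumeClaim %s phase=%s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and claim names are from this particular run.
	if err := waitForPVCBound(cs, "csi-mock-volumes-103", "pvc-wt6pk"); err != nil {
		panic(err)
	}
}
------------------------------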
Jun 18 00:03:58.610: INFO: PersistentVolumeClaim pvc-wt6pk found and phase=Bound (2.008735301s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Jun 18 00:04:14.647: INFO: Deleting pod "pvc-volume-tester-ffhpl" in namespace "csi-mock-volumes-103" Jun 18 00:04:14.651: INFO: Wait up to 5m0s for pod "pvc-volume-tester-ffhpl" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-ffhpl Jun 18 00:04:34.681: INFO: Deleting pod "pvc-volume-tester-ffhpl" in namespace "csi-mock-volumes-103" STEP: Deleting pod pvc-volume-tester-42vmg Jun 18 00:04:34.683: INFO: Deleting pod "pvc-volume-tester-42vmg" in namespace "csi-mock-volumes-103" Jun 18 00:04:34.687: INFO: Wait up to 5m0s for pod "pvc-volume-tester-42vmg" to be fully deleted STEP: Deleting claim pvc-wt6pk Jun 18 00:04:40.700: INFO: Waiting up to 2m0s for PersistentVolume pvc-2c051d0f-5a08-4b78-bee2-4637078321b5 to get deleted Jun 18 00:04:40.703: INFO: PersistentVolume pvc-2c051d0f-5a08-4b78-bee2-4637078321b5 found and phase=Bound (2.368059ms) Jun 18 00:04:42.707: INFO: PersistentVolume pvc-2c051d0f-5a08-4b78-bee2-4637078321b5 found and phase=Released (2.006144762s) Jun 18 00:04:44.710: INFO: PersistentVolume pvc-2c051d0f-5a08-4b78-bee2-4637078321b5 was removed STEP: Deleting storageclass csi-mock-volumes-103-sccfqvg STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-103 STEP: Waiting for namespaces [csi-mock-volumes-103] to vanish STEP: uninstalling csi mock driver Jun 18 00:04:50.727: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-attacher Jun 18 00:04:50.731: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-103 Jun 18 00:04:50.736: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-103 Jun 18 00:04:50.739: INFO: deleting *v1.Role: csi-mock-volumes-103-4372/external-attacher-cfg-csi-mock-volumes-103 Jun 18 00:04:50.743: INFO: deleting *v1.RoleBinding: csi-mock-volumes-103-4372/csi-attacher-role-cfg Jun 18 00:04:50.746: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-provisioner Jun 18 00:04:50.749: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-103 Jun 18 00:04:50.752: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-103 Jun 18 00:04:50.756: INFO: deleting *v1.Role: csi-mock-volumes-103-4372/external-provisioner-cfg-csi-mock-volumes-103 Jun 18 00:04:50.759: INFO: deleting *v1.RoleBinding: csi-mock-volumes-103-4372/csi-provisioner-role-cfg Jun 18 00:04:50.763: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-resizer Jun 18 00:04:50.767: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-103 Jun 18 00:04:50.770: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-103 Jun 18 00:04:50.773: INFO: deleting *v1.Role: csi-mock-volumes-103-4372/external-resizer-cfg-csi-mock-volumes-103 Jun 18 00:04:50.776: INFO: deleting *v1.RoleBinding: csi-mock-volumes-103-4372/csi-resizer-role-cfg Jun 18 00:04:50.779: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-snapshotter Jun 18 00:04:50.783: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-103 Jun 18 00:04:50.787: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-103 Jun 18 00:04:50.790: INFO: deleting *v1.Role: 
csi-mock-volumes-103-4372/external-snapshotter-leaderelection-csi-mock-volumes-103 Jun 18 00:04:50.793: INFO: deleting *v1.RoleBinding: csi-mock-volumes-103-4372/external-snapshotter-leaderelection Jun 18 00:04:50.796: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-103-4372/csi-mock Jun 18 00:04:50.799: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-103 Jun 18 00:04:50.802: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-103 Jun 18 00:04:50.805: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-103 Jun 18 00:04:50.808: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-103 Jun 18 00:04:50.812: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-103 Jun 18 00:04:50.815: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-103 Jun 18 00:04:50.818: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-103 Jun 18 00:04:50.821: INFO: deleting *v1.StatefulSet: csi-mock-volumes-103-4372/csi-mockplugin Jun 18 00:04:50.824: INFO: deleting *v1.StatefulSet: csi-mock-volumes-103-4372/csi-mockplugin-attacher Jun 18 00:04:50.829: INFO: deleting *v1.StatefulSet: csi-mock-volumes-103-4372/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-103-4372 STEP: Waiting for namespaces [csi-mock-volumes-103-4372] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:02.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:75.937 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":4,"skipped":152,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:03.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 18 00:05:03.040: INFO: The status of Pod test-hostpath-type-mw79j is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:05:05.043: INFO: The status of Pod test-hostpath-type-mw79j is Pending, 
waiting for it to be Running (with Ready = true) Jun 18 00:05:07.043: INFO: The status of Pod test-hostpath-type-mw79j is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 18 00:05:07.046: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-2223 PodName:test-hostpath-type-mw79j ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:05:07.046: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:09.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-2223" for this suite. • [SLOW TEST:6.161 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:01.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-9934 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 18 00:04:02.055: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-attacher Jun 18 00:04:02.058: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9934 Jun 18 00:04:02.058: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9934 Jun 18 00:04:02.060: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9934 Jun 18 00:04:02.063: INFO: creating *v1.Role: csi-mock-volumes-9934-7893/external-attacher-cfg-csi-mock-volumes-9934 Jun 18 00:04:02.066: INFO: creating *v1.RoleBinding: csi-mock-volumes-9934-7893/csi-attacher-role-cfg Jun 18 00:04:02.069: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-provisioner Jun 18 00:04:02.071: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9934 Jun 18 00:04:02.071: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9934 Jun 18 00:04:02.074: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9934 Jun 18 00:04:02.077: INFO: creating *v1.Role: csi-mock-volumes-9934-7893/external-provisioner-cfg-csi-mock-volumes-9934 Jun 18 00:04:02.080: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-9934-7893/csi-provisioner-role-cfg Jun 18 00:04:02.083: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-resizer Jun 18 00:04:02.085: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9934 Jun 18 00:04:02.085: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9934 Jun 18 00:04:02.088: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9934 Jun 18 00:04:02.091: INFO: creating *v1.Role: csi-mock-volumes-9934-7893/external-resizer-cfg-csi-mock-volumes-9934 Jun 18 00:04:02.093: INFO: creating *v1.RoleBinding: csi-mock-volumes-9934-7893/csi-resizer-role-cfg Jun 18 00:04:02.096: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-snapshotter Jun 18 00:04:02.098: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9934 Jun 18 00:04:02.098: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9934 Jun 18 00:04:02.100: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9934 Jun 18 00:04:02.103: INFO: creating *v1.Role: csi-mock-volumes-9934-7893/external-snapshotter-leaderelection-csi-mock-volumes-9934 Jun 18 00:04:02.106: INFO: creating *v1.RoleBinding: csi-mock-volumes-9934-7893/external-snapshotter-leaderelection Jun 18 00:04:02.109: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-mock Jun 18 00:04:02.112: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9934 Jun 18 00:04:02.114: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9934 Jun 18 00:04:02.118: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9934 Jun 18 00:04:02.120: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9934 Jun 18 00:04:02.122: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9934 Jun 18 00:04:02.125: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9934 Jun 18 00:04:02.128: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9934 Jun 18 00:04:02.130: INFO: creating *v1.StatefulSet: csi-mock-volumes-9934-7893/csi-mockplugin Jun 18 00:04:02.135: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9934 Jun 18 00:04:02.137: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9934" Jun 18 00:04:02.140: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9934 to register on node node1 I0618 00:04:09.416297 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0618 00:04:09.417667 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9934","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:04:09.459660 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9934","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:04:09.464045 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} 
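------------------------------
Each gRPCCall: entry emitted by the mock driver above is a single JSON record describing one CSI RPC: Method, Request, Response, the flattened Error string, and FullError carrying the gRPC status. A sketch that reproduces the same shape (the struct and its field names are inferred from the log lines, not taken from the mock driver's source):

package main

import (
	"encoding/json"
	"fmt"
)

// grpcCallRecord mirrors the JSON keys visible in the gRPCCall lines.
// The Go types here are inferred; the real mock driver may differ.
type grpcCallRecord struct {
	Method    string      `json:"Method"`
	Request   interface{} `json:"Request"`
	Response  interface{} `json:"Response"`
	Error     string      `json:"Error"`
	FullError interface{} `json:"FullError"` // nil marshals as null, as in successful calls
}

func main() {
	rec := grpcCallRecord{
		Method:  "/csi.v1.Identity/GetPluginInfo",
		Request: map[string]interface{}{},
		Response: map[string]interface{}{
			"name":           "csi-mock-csi-mock-volumes-9934",
			"vendor_version": "0.3.0",
		},
		Error: "",
	}
	out, _ := json.Marshal(rec)
	fmt.Println("gRPCCall:", string(out))
}

In the FullError field, code 3 corresponds to codes.InvalidArgument and code 8 to codes.ResourceExhausted in google.golang.org/grpc/codes, which is how the injected "fake error" responses later in the log should be read.
------------------------------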
I0618 00:04:09.466268 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0618 00:04:10.077884 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9934"},"Error":"","FullError":null} STEP: Creating pod Jun 18 00:04:11.659: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:04:11.663: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-559bb] to have phase Bound Jun 18 00:04:11.666: INFO: PersistentVolumeClaim pvc-559bb found but phase is Pending instead of Bound. I0618 00:04:11.672666 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e"}}},"Error":"","FullError":null} Jun 18 00:04:13.669: INFO: PersistentVolumeClaim pvc-559bb found and phase=Bound (2.005448963s) Jun 18 00:04:13.683: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-559bb] to have phase Bound Jun 18 00:04:13.686: INFO: PersistentVolumeClaim pvc-559bb found and phase=Bound (2.92553ms) STEP: Waiting for expected CSI calls I0618 00:04:15.428368 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:04:15.568350 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e","storage.kubernetes.io/csiProvisionerIdentity":"1655510649508-8081-csi-mock-csi-mock-volumes-9934"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Deleting the previously created pod Jun 18 00:04:15.687: INFO: Deleting pod "pvc-volume-tester-d6n9c" in namespace "csi-mock-volumes-9934" Jun 18 00:04:15.692: INFO: Wait up to 5m0s for pod "pvc-volume-tester-d6n9c" to be fully deleted I0618 00:04:16.233939 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:04:16.235791 30 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e","storage.kubernetes.io/csiProvisionerIdentity":"1655510649508-8081-csi-mock-csi-mock-volumes-9934"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0618 00:04:17.251661 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:04:17.253775 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e","storage.kubernetes.io/csiProvisionerIdentity":"1655510649508-8081-csi-mock-csi-mock-volumes-9934"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0618 00:04:19.265439 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:04:19.267609 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e","storage.kubernetes.io/csiProvisionerIdentity":"1655510649508-8081-csi-mock-csi-mock-volumes-9934"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-d6n9c Jun 18 00:04:20.698: INFO: Deleting pod "pvc-volume-tester-d6n9c" in namespace "csi-mock-volumes-9934" STEP: Deleting claim pvc-559bb Jun 18 00:04:20.710: INFO: Waiting up to 2m0s for PersistentVolume pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e to get deleted Jun 18 00:04:20.712: INFO: PersistentVolume pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e found and phase=Bound (2.035255ms) I0618 00:04:20.723779 30 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 18 00:04:22.715: INFO: PersistentVolume pvc-0fdd5320-9837-4cea-bb2a-f26ba1253e1e was removed STEP: Deleting storageclass csi-mock-volumes-9934-scpp9tx STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9934 STEP: Waiting for namespaces [csi-mock-volumes-9934] to vanish STEP: uninstalling csi mock driver Jun 18 00:04:28.751: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-attacher Jun 18 00:04:28.755: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9934 Jun 18 00:04:28.759: INFO: deleting 
*v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9934 Jun 18 00:04:28.762: INFO: deleting *v1.Role: csi-mock-volumes-9934-7893/external-attacher-cfg-csi-mock-volumes-9934 Jun 18 00:04:28.766: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9934-7893/csi-attacher-role-cfg Jun 18 00:04:28.769: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-provisioner Jun 18 00:04:28.773: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9934 Jun 18 00:04:28.776: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9934 Jun 18 00:04:28.780: INFO: deleting *v1.Role: csi-mock-volumes-9934-7893/external-provisioner-cfg-csi-mock-volumes-9934 Jun 18 00:04:28.783: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9934-7893/csi-provisioner-role-cfg Jun 18 00:04:28.786: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-resizer Jun 18 00:04:28.790: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9934 Jun 18 00:04:28.793: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9934 Jun 18 00:04:28.797: INFO: deleting *v1.Role: csi-mock-volumes-9934-7893/external-resizer-cfg-csi-mock-volumes-9934 Jun 18 00:04:28.800: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9934-7893/csi-resizer-role-cfg Jun 18 00:04:28.803: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-snapshotter Jun 18 00:04:28.806: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9934 Jun 18 00:04:28.814: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9934 Jun 18 00:04:28.822: INFO: deleting *v1.Role: csi-mock-volumes-9934-7893/external-snapshotter-leaderelection-csi-mock-volumes-9934 Jun 18 00:04:28.831: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9934-7893/external-snapshotter-leaderelection Jun 18 00:04:28.835: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9934-7893/csi-mock Jun 18 00:04:28.838: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9934 Jun 18 00:04:28.841: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9934 Jun 18 00:04:28.845: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9934 Jun 18 00:04:28.848: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9934 Jun 18 00:04:28.851: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9934 Jun 18 00:04:28.854: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9934 Jun 18 00:04:28.859: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9934 Jun 18 00:04:28.863: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9934-7893/csi-mockplugin Jun 18 00:04:28.867: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9934 STEP: deleting the driver namespace: csi-mock-volumes-9934-7893 STEP: Waiting for namespaces [csi-mock-volumes-9934-7893] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:12.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:70.894 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error","total":-1,"completed":8,"skipped":187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:12.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 18 00:05:12.971: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:12.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-6833" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:13.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Jun 18 00:05:13.037: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Jun 18 00:05:13.042: INFO: Waiting up to 30s for PersistentVolume hostpath-kn69m to have phase Available Jun 18 00:05:13.045: INFO: PersistentVolume hostpath-kn69m found but phase is Pending instead of Available. 
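------------------------------
The PV Protection setup above creates a bare hostPath PersistentVolume and waits for it to reach phase Available before checking the protection finalizer. A sketch of an equivalent PV object with client-go types (capacity, reclaim policy and path are illustrative choices, not taken from the test):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Capacity, reclaim policy and path below are illustrative.
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "hostpath-"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("1Gi"),
			},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/tmp/hostpath-pv"},
			},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
		},
	}
	// Once created and unbound, the PV surfaces Status.Phase == v1.VolumeAvailable,
	// which is what the "Waiting up to 30s ... to have phase Available" loop above
	// checks. The pv-protection finalizer is added server-side by the controller
	// manager, not by the client.
	fmt.Println(pv.Spec.HostPath.Path, v1.VolumeAvailable)
}
------------------------------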
Jun 18 00:05:14.048: INFO: PersistentVolume hostpath-kn69m found and phase=Available (1.005618148s) STEP: Checking that PV Protection finalizer is set [It] Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 STEP: Deleting the PV Jun 18 00:05:14.055: INFO: Waiting up to 3m0s for PersistentVolume hostpath-kn69m to get deleted Jun 18 00:05:14.057: INFO: PersistentVolume hostpath-kn69m found and phase=Available (2.174693ms) Jun 18 00:05:16.062: INFO: PersistentVolume hostpath-kn69m was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:16.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-3963" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Jun 18 00:05:16.073: INFO: AfterEach: Cleaning up test resources. Jun 18 00:05:16.073: INFO: pvc is nil Jun 18 00:05:16.073: INFO: Deleting PersistentVolume "hostpath-kn69m" • ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":9,"skipped":232,"failed":0} SSSS ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket","total":-1,"completed":5,"skipped":216,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:09.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 18 00:05:09.201: INFO: The status of Pod test-hostpath-type-4nrbh is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:05:11.206: INFO: The status of Pod test-hostpath-type-4nrbh is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:05:13.204: INFO: The status of Pod test-hostpath-type-4nrbh is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:19.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-374" for this suite. 
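------------------------------
The negative HostPathType cases above do not expect the pod to start at all; they pass by finding an error event on the pod (the "Checking for HostPathType error event" step), which a hostPath type mismatch typically surfaces as a FailedMount warning. A sketch of listing those events with client-go (the namespace and pod name are placeholders, since the failing pod's generated name is not shown in the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// NAMESPACE and POD_NAME are placeholders for the test namespace and the
	// pod whose hostPath mount is expected to fail.
	events, err := cs.CoreV1().Events("NAMESPACE").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=POD_NAME",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// A type mismatch shows up here as a Warning event describing the
		// failed hostPath mount.
		fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
	}
}
------------------------------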
• [SLOW TEST:10.096 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev","total":-1,"completed":6,"skipped":216,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:16.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 18 00:05:16.126: INFO: The status of Pod test-hostpath-type-nwx5g is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:05:18.128: INFO: The status of Pod test-hostpath-type-nwx5g is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:05:20.129: INFO: The status of Pod test-hostpath-type-nwx5g is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Jun 18 00:05:20.133: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-5615 PodName:test-hostpath-type-nwx5g ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:05:20.133: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:24.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-5615" for this suite. 
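------------------------------
The character-device fixtures above are prepared by exec'ing mknod inside the already-running test pod (the ExecWithOptions entries). A sketch of the same exec using client-go's remotecommand package; the namespace, pod and container names are taken from this particular run and would differ elsewhere:

package main

import (
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the exec request against the pods/exec subresource; this is the
	// same mechanism the framework's ExecWithOptions uses. Names below are
	// from this run of the suite.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("host-path-type-char-dev-5615").
		Name("test-hostpath-type-nwx5g").
		SubResource("exec").
		VersionedParams(&v1.PodExecOptions{
			Container: "host-path-testing",
			Command:   []string{"/bin/sh", "-c", "mknod /mnt/test/achardev c 89 1"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	if err := exec.Stream(remotecommand.StreamOptions{
		Stdout: os.Stdout,
		Stderr: os.Stderr,
	}); err != nil {
		panic(err)
	}
}
------------------------------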
• [SLOW TEST:8.155 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset","total":-1,"completed":10,"skipped":236,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:24.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 18 00:05:24.324: INFO: The status of Pod test-hostpath-type-h8w88 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:05:26.328: INFO: The status of Pod test-hostpath-type-h8w88 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:05:28.328: INFO: The status of Pod test-hostpath-type-h8w88 is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Jun 18 00:05:28.331: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-5380 PodName:test-hostpath-type-h8w88 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:05:28.331: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:30.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-5380" for this suite. 
• [SLOW TEST:6.161 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile","total":-1,"completed":11,"skipped":253,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:19.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:05:23.356: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-40a95679-5843-4ccd-b3d1-7ad37463d093-backend && ln -s /tmp/local-volume-test-40a95679-5843-4ccd-b3d1-7ad37463d093-backend /tmp/local-volume-test-40a95679-5843-4ccd-b3d1-7ad37463d093] Namespace:persistent-local-volumes-test-5167 PodName:hostexec-node2-jth64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:23.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:05:23.446: INFO: Creating a PV followed by a PVC Jun 18 00:05:23.454: INFO: Waiting for PV local-pvwcqp2 to bind to PVC pvc-s4skq Jun 18 00:05:23.454: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-s4skq] to have phase Bound Jun 18 00:05:23.456: INFO: PersistentVolumeClaim pvc-s4skq found but phase is Pending instead of Bound. Jun 18 00:05:25.461: INFO: PersistentVolumeClaim pvc-s4skq found but phase is Pending instead of Bound. 
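------------------------------
The dir-link volume type above backs a local PersistentVolume with a symlinked directory on node2. A local PV has to pin itself to the node that owns the path through spec.nodeAffinity; a minimal sketch (the path and node name follow the log, the capacity and storage class name are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: v1.PersistentVolumeSpec{
			// Capacity and storage class name are illustrative.
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				Local: &v1.LocalVolumeSource{
					// The test points this at a symlink created on the node
					// with "ln -s ...-backend ...".
					Path: "/tmp/local-volume-test-40a95679-5843-4ccd-b3d1-7ad37463d093",
				},
			},
			StorageClassName: "local-storage",
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{"node2"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Spec.Local.Path)
}
------------------------------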
Jun 18 00:05:27.464: INFO: PersistentVolumeClaim pvc-s4skq found and phase=Bound (4.010482469s) Jun 18 00:05:27.464: INFO: Waiting up to 3m0s for PersistentVolume local-pvwcqp2 to have phase Bound Jun 18 00:05:27.468: INFO: PersistentVolume local-pvwcqp2 found and phase=Bound (3.158494ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:05:31.496: INFO: pod "pod-0731e1cb-6538-4e7b-95b4-c50c38c38b55" created on Node "node2" STEP: Writing in pod1 Jun 18 00:05:31.496: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5167 PodName:pod-0731e1cb-6538-4e7b-95b4-c50c38c38b55 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:05:31.496: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:31.578: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 18 00:05:31.578: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5167 PodName:pod-0731e1cb-6538-4e7b-95b4-c50c38c38b55 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:05:31.578: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:31.659: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-0731e1cb-6538-4e7b-95b4-c50c38c38b55 in namespace persistent-local-volumes-test-5167 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:05:31.665: INFO: Deleting PersistentVolumeClaim "pvc-s4skq" Jun 18 00:05:31.669: INFO: Deleting PersistentVolume "local-pvwcqp2" STEP: Removing the test directory Jun 18 00:05:31.673: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-40a95679-5843-4ccd-b3d1-7ad37463d093 && rm -r /tmp/local-volume-test-40a95679-5843-4ccd-b3d1-7ad37463d093-backend] Namespace:persistent-local-volumes-test-5167 PodName:hostexec-node2-jth64 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:31.673: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:31.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5167" for this suite. 
• [SLOW TEST:12.470 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":236,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:31.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c" Jun 18 00:05:35.850: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c && dd if=/dev/zero of=/tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c/file] Namespace:persistent-local-volumes-test-8837 PodName:hostexec-node1-v44p5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:35.850: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:36.535: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8837 PodName:hostexec-node1-v44p5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:36.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:05:36.669: INFO: Creating a PV followed by a PVC Jun 18 00:05:36.676: INFO: Waiting for PV local-pv6qvzn to bind to PVC pvc-mk9qf Jun 18 00:05:36.676: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-mk9qf] to have phase Bound Jun 18 00:05:36.678: INFO: PersistentVolumeClaim pvc-mk9qf found but phase is Pending instead of Bound. 
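------------------------------
The blockfswithoutformat volume type above builds its backing store by creating a file with dd and attaching it to a loop device with losetup, executed through a hostexec pod with nsenter. A sketch of the same command sequence driven from Go with os/exec, to be run directly on a node as root (the directory name follows the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a shell command and returns its trimmed output,
// panicking on failure to keep the sketch short.
func run(cmdline string) string {
	out, err := exec.Command("/bin/sh", "-c", cmdline).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s: %v\n%s", cmdline, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	dir := "/tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c"

	// Same sequence as the hostexec command in the log: create a 20 MiB
	// backing file (4096 * 5120 bytes) and attach it to the first free
	// loop device.
	run("mkdir -p " + dir)
	run("dd if=/dev/zero of=" + dir + "/file bs=4096 count=5120")
	run("losetup -f " + dir + "/file")

	// Discover which loop device was picked, as the test does with
	// "losetup | grep ... | awk '{ print $1 }'".
	loopDev := run("losetup | grep " + dir + "/file | awk '{ print $1 }'")
	fmt.Println("backing loop device:", loopDev)

	// Teardown mirrors the log: detach the loop device, remove the dir.
	run("losetup -d " + loopDev)
	run("rm -r " + dir)
}
------------------------------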
Jun 18 00:05:38.681: INFO: PersistentVolumeClaim pvc-mk9qf found but phase is Pending instead of Bound. Jun 18 00:05:40.687: INFO: PersistentVolumeClaim pvc-mk9qf found but phase is Pending instead of Bound. Jun 18 00:05:42.692: INFO: PersistentVolumeClaim pvc-mk9qf found and phase=Bound (6.015928599s) Jun 18 00:05:42.692: INFO: Waiting up to 3m0s for PersistentVolume local-pv6qvzn to have phase Bound Jun 18 00:05:42.694: INFO: PersistentVolume local-pv6qvzn found and phase=Bound (2.193074ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:05:48.727: INFO: pod "pod-6011d9ba-3f6f-4c6e-b88b-98de982ce4bd" created on Node "node1" STEP: Writing in pod1 Jun 18 00:05:48.727: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8837 PodName:pod-6011d9ba-3f6f-4c6e-b88b-98de982ce4bd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:05:48.727: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:48.830: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 18 00:05:48.830: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8837 PodName:pod-6011d9ba-3f6f-4c6e-b88b-98de982ce4bd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:05:48.830: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:48.916: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 18 00:05:48.916: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8837 PodName:pod-6011d9ba-3f6f-4c6e-b88b-98de982ce4bd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:05:48.916: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:49.000: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-6011d9ba-3f6f-4c6e-b88b-98de982ce4bd in namespace persistent-local-volumes-test-8837 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:05:49.006: INFO: Deleting PersistentVolumeClaim "pvc-mk9qf" Jun 18 00:05:49.010: INFO: Deleting PersistentVolume "local-pv6qvzn" Jun 18 00:05:49.014: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8837 
PodName:hostexec-node1-v44p5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:49.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c/file Jun 18 00:05:49.120: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8837 PodName:hostexec-node1-v44p5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:49.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c Jun 18 00:05:49.213: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-349ed967-ada3-469b-b650-c4a80f36440c] Namespace:persistent-local-volumes-test-8837 PodName:hostexec-node1-v44p5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:49.213: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:05:49.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8837" for this suite. • [SLOW TEST:17.543 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":8,"skipped":244,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:42.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 STEP: Building a driver namespace object, basename csi-mock-volumes-4843 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 18 00:04:42.306: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-attacher Jun 18 00:04:42.310: INFO: creating 
*v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4843 Jun 18 00:04:42.310: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4843 Jun 18 00:04:42.313: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4843 Jun 18 00:04:42.316: INFO: creating *v1.Role: csi-mock-volumes-4843-1454/external-attacher-cfg-csi-mock-volumes-4843 Jun 18 00:04:42.319: INFO: creating *v1.RoleBinding: csi-mock-volumes-4843-1454/csi-attacher-role-cfg Jun 18 00:04:42.321: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-provisioner Jun 18 00:04:42.324: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4843 Jun 18 00:04:42.324: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4843 Jun 18 00:04:42.327: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4843 Jun 18 00:04:42.331: INFO: creating *v1.Role: csi-mock-volumes-4843-1454/external-provisioner-cfg-csi-mock-volumes-4843 Jun 18 00:04:42.334: INFO: creating *v1.RoleBinding: csi-mock-volumes-4843-1454/csi-provisioner-role-cfg Jun 18 00:04:42.336: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-resizer Jun 18 00:04:42.339: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4843 Jun 18 00:04:42.339: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4843 Jun 18 00:04:42.342: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4843 Jun 18 00:04:42.344: INFO: creating *v1.Role: csi-mock-volumes-4843-1454/external-resizer-cfg-csi-mock-volumes-4843 Jun 18 00:04:42.347: INFO: creating *v1.RoleBinding: csi-mock-volumes-4843-1454/csi-resizer-role-cfg Jun 18 00:04:42.349: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-snapshotter Jun 18 00:04:42.352: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4843 Jun 18 00:04:42.352: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4843 Jun 18 00:04:42.354: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4843 Jun 18 00:04:42.357: INFO: creating *v1.Role: csi-mock-volumes-4843-1454/external-snapshotter-leaderelection-csi-mock-volumes-4843 Jun 18 00:04:42.359: INFO: creating *v1.RoleBinding: csi-mock-volumes-4843-1454/external-snapshotter-leaderelection Jun 18 00:04:42.362: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-mock Jun 18 00:04:42.365: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4843 Jun 18 00:04:42.367: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4843 Jun 18 00:04:42.370: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4843 Jun 18 00:04:42.372: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4843 Jun 18 00:04:42.375: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4843 Jun 18 00:04:42.377: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4843 Jun 18 00:04:42.380: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4843 Jun 18 00:04:42.382: INFO: creating *v1.StatefulSet: csi-mock-volumes-4843-1454/csi-mockplugin Jun 18 00:04:42.386: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4843 Jun 18 00:04:42.389: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4843" Jun 18 00:04:42.391: INFO: waiting for CSIDriver 
csi-mock-csi-mock-volumes-4843 to register on node node2 I0618 00:04:48.459196 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0618 00:04:48.461120 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4843","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:04:48.462635 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0618 00:04:48.464970 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0618 00:04:48.561697 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4843","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:04:48.679253 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4843"},"Error":"","FullError":null} STEP: Creating pod Jun 18 00:04:51.909: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:04:51.913: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mzzwd] to have phase Bound Jun 18 00:04:51.915: INFO: PersistentVolumeClaim pvc-mzzwd found but phase is Pending instead of Bound. 
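Reader's note: the "Waiting up to timeout=5m0s for PersistentVolumeClaims [...] to have phase Bound" messages above come from a simple poll against the claim's status. A minimal client-go sketch of that kind of wait is shown below; the 2-second interval, the hard-coded claim and namespace names, and the kubeconfig path are illustrative assumptions, not the e2e framework's actual helper.

```go
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls a PersistentVolumeClaim until it reports phase Bound,
// roughly mirroring the "found but phase is Pending instead of Bound" /
// "found and phase=Bound" lines in the log.
func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PVC %s phase: %s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}

func main() {
	// The kubeconfig path matches the one used throughout this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(cs, "csi-mock-volumes-4843", "pvc-mzzwd"); err != nil {
		panic(err)
	}
}
```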
I0618 00:04:51.922695 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0618 00:04:51.935659 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb"}}},"Error":"","FullError":null} Jun 18 00:04:53.919: INFO: PersistentVolumeClaim pvc-mzzwd found and phase=Bound (2.005451519s) I0618 00:04:54.174837 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:04:54.176: INFO: >>> kubeConfig: /root/.kube/config I0618 00:04:54.260738 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb","storage.kubernetes.io/csiProvisionerIdentity":"1655510688465-8081-csi-mock-csi-mock-volumes-4843"}},"Response":{},"Error":"","FullError":null} I0618 00:04:54.265749 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:04:54.267: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:04:54.347: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:04:54.463: INFO: >>> kubeConfig: /root/.kube/config I0618 00:04:54.561836 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb/globalmount","target_path":"/var/lib/kubelet/pods/e2153819-3388-4251-a937-2845c6c7c14b/volumes/kubernetes.io~csi/pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb","storage.kubernetes.io/csiProvisionerIdentity":"1655510688465-8081-csi-mock-csi-mock-volumes-4843"}},"Response":{},"Error":"","FullError":null} Jun 18 00:04:59.941: INFO: Deleting pod "pvc-volume-tester-lxf8k" in namespace "csi-mock-volumes-4843" Jun 18 00:04:59.947: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lxf8k" to be fully deleted Jun 18 00:05:02.836: INFO: >>> kubeConfig: /root/.kube/config I0618 00:05:02.918898 27 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e2153819-3388-4251-a937-2845c6c7c14b/volumes/kubernetes.io~csi/pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb/mount"},"Response":{},"Error":"","FullError":null} I0618 00:05:02.937735 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:05:02.939289 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb/globalmount"},"Response":{},"Error":"","FullError":null} I0618 00:05:09.974886 27 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Jun 18 00:05:10.961: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mzzwd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4843", SelfLink:"", UID:"bdbb9117-79dc-4da4-be1c-c92294d37aeb", ResourceVersion:"91487", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107491, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002f737d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002f737e8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000c2db50), VolumeMode:(*v1.PersistentVolumeMode)(0xc000c2db70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:05:10.961: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mzzwd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4843", SelfLink:"", UID:"bdbb9117-79dc-4da4-be1c-c92294d37aeb", ResourceVersion:"91488", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107491, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4843"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034b66d8), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc0034b66f0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034b6708), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034b6720)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0045378e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0045378f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:05:10.961: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mzzwd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4843", SelfLink:"", UID:"bdbb9117-79dc-4da4-be1c-c92294d37aeb", ResourceVersion:"91494", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107491, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4843"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003540948), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003540960)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003540978), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003540990)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb", StorageClassName:(*string)(0xc004839c40), VolumeMode:(*v1.PersistentVolumeMode)(0xc004839c50), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:05:10.961: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mzzwd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4843", SelfLink:"", UID:"bdbb9117-79dc-4da4-be1c-c92294d37aeb", ResourceVersion:"91495", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107491, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", 
"pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4843"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0035409c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0035409d8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0035409f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003540a08)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb", StorageClassName:(*string)(0xc004839c80), VolumeMode:(*v1.PersistentVolumeMode)(0xc004839c90), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:05:10.962: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mzzwd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4843", SelfLink:"", UID:"bdbb9117-79dc-4da4-be1c-c92294d37aeb", ResourceVersion:"91763", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107491, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc0034b76e0), DeletionGracePeriodSeconds:(*int64)(0xc000ad2978), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4843"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034b76f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034b7710)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034b7728), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034b7740)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb", StorageClassName:(*string)(0xc00517d460), VolumeMode:(*v1.PersistentVolumeMode)(0xc00517d470), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:05:10.962: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mzzwd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4843", SelfLink:"", UID:"bdbb9117-79dc-4da4-be1c-c92294d37aeb", ResourceVersion:"91764", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107491, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc0034b7770), DeletionGracePeriodSeconds:(*int64)(0xc000ad2a28), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4843"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034b7788), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034b77a0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0034b77b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0034b77d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-bdbb9117-79dc-4da4-be1c-c92294d37aeb", StorageClassName:(*string)(0xc00517d4b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00517d4c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-lxf8k Jun 18 00:05:10.962: INFO: Deleting pod "pvc-volume-tester-lxf8k" in namespace "csi-mock-volumes-4843" STEP: Deleting claim pvc-mzzwd STEP: Deleting storageclass csi-mock-volumes-4843-sc99gfd STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4843 STEP: Waiting for namespaces [csi-mock-volumes-4843] to vanish STEP: uninstalling csi mock driver Jun 18 00:05:17.992: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-attacher Jun 18 00:05:17.995: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4843 Jun 18 00:05:17.999: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4843 Jun 18 00:05:18.003: INFO: deleting *v1.Role: csi-mock-volumes-4843-1454/external-attacher-cfg-csi-mock-volumes-4843 Jun 18 00:05:18.006: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4843-1454/csi-attacher-role-cfg Jun 18 00:05:18.010: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-provisioner Jun 18 00:05:18.014: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4843 Jun 18 
00:05:18.017: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4843 Jun 18 00:05:18.020: INFO: deleting *v1.Role: csi-mock-volumes-4843-1454/external-provisioner-cfg-csi-mock-volumes-4843 Jun 18 00:05:18.023: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4843-1454/csi-provisioner-role-cfg Jun 18 00:05:18.027: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-resizer Jun 18 00:05:18.030: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4843 Jun 18 00:05:18.033: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4843 Jun 18 00:05:18.036: INFO: deleting *v1.Role: csi-mock-volumes-4843-1454/external-resizer-cfg-csi-mock-volumes-4843 Jun 18 00:05:18.039: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4843-1454/csi-resizer-role-cfg Jun 18 00:05:18.044: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-snapshotter Jun 18 00:05:18.047: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4843 Jun 18 00:05:18.051: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4843 Jun 18 00:05:18.054: INFO: deleting *v1.Role: csi-mock-volumes-4843-1454/external-snapshotter-leaderelection-csi-mock-volumes-4843 Jun 18 00:05:18.058: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4843-1454/external-snapshotter-leaderelection Jun 18 00:05:18.061: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4843-1454/csi-mock Jun 18 00:05:18.064: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4843 Jun 18 00:05:18.077: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4843 Jun 18 00:05:18.080: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4843 Jun 18 00:05:18.084: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4843 Jun 18 00:05:18.088: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4843 Jun 18 00:05:18.091: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4843 Jun 18 00:05:18.094: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4843 Jun 18 00:05:18.097: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4843-1454/csi-mockplugin Jun 18 00:05:18.100: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4843 STEP: deleting the driver namespace: csi-mock-volumes-4843-1454 STEP: Waiting for namespaces [csi-mock-volumes-4843-1454] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:06:02.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:79.874 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022 exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":11,"skipped":299,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:06:02.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should test that deleting a claim before the volume is provisioned deletes the volume. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511 Jun 18 00:06:02.192: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:06:02.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-4626" for this suite. S [SKIPPING] [0.035 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should test that deleting a claim before the volume is provisioned deletes the volume. [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:517 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:30.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Jun 18 00:06:00.502: INFO: Deleting pod "pv-5979"/"pod-ephm-test-projected-z8tp" Jun 18 00:06:00.502: INFO: Deleting pod "pod-ephm-test-projected-z8tp" in namespace "pv-5979" Jun 18 00:06:00.509: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-z8tp" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:06:08.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5979" for this suite. 
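Reader's note: the Ephemeralstorage spec above ("should allow deletion of pod with invalid volume : configmap") creates a pod whose volume references a configMap that never exists and then verifies the pod can still be deleted cleanly. A rough client-go sketch of that shape follows; the pod, container, image, and namespace names are placeholders, and the error handling is deliberately minimal.

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "pv-test" // illustrative namespace

	// A pod that mounts a configMap which is never created: the kubelet keeps
	// the pod stuck on the missing volume, but deletion must still succeed.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-ephm-test"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []v1.VolumeMount{{Name: "missing-cm", MountPath: "/etc/cm"}},
			}},
			Volumes: []v1.Volume{{
				Name: "missing-cm",
				VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{Name: "does-not-exist"},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The deletion is the point of the spec: it should go through even though
	// the referenced configMap was never provisioned.
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), pod.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```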
• [SLOW TEST:38.066 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":12,"skipped":257,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:06:08.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 18 00:06:10.601: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9425 PodName:hostexec-node1-k79wd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:10.601: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:06:10.690: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 18 00:06:10.690: INFO: exec node1: stdout: "0\n" Jun 18 00:06:10.690: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 18 00:06:10.690: INFO: exec node1: exit code: 0 Jun 18 00:06:10.690: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:06:10.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9425" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.154 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:05:49.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Jun 18 00:05:53.454: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a854fbf8-4fe4-47ce-925f-1dde6370b5be] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:53.454: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:53.542: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e0b39e71-d47a-4f82-95bd-8fdea33867e9] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:53.542: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:53.640: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ca34740d-8f9d-4c49-8dec-2445316baba0] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:53.640: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:53.725: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c03e4f9b-a0c7-4f46-8a6b-5341722bc3a1] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:53.725: INFO: >>> kubeConfig: /root/.kube/config 
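Reader's note: the repeated "ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p ...]}" entries are the framework running commands inside a privileged hostexec pod to prepare directories on the node. A bare-bones sketch of that exec-subresource call with client-go is shown below; the pod name, namespace, container name, and command are placeholders, and this is not the framework's ExecWithOptions helper itself.

```go
package main

import (
	"bytes"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Placeholder identifiers; the suite generates the hostexec pod names.
	ns, pod, container := "persistent-local-volumes-test", "hostexec-node1", "agnhost-container"
	cmd := []string{"nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--", "sh", "-c",
		"mkdir -p /tmp/local-volume-test-example"}

	// Build the exec subresource request against the hostexec pod.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&v1.PodExecOptions{
			Container: container,
			Command:   cmd,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout: %q stderr: %q\n", stdout.String(), stderr.String())
}
```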
Jun 18 00:05:53.812: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-29698d14-8d19-4b6c-ab3b-7c6b7bd59a81] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:53.812: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:05:53.913: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9246bd76-4532-4ad7-8cdc-d9ca9179151f] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:05:53.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:05:54.001: INFO: Creating a PV followed by a PVC Jun 18 00:05:54.009: INFO: Creating a PV followed by a PVC Jun 18 00:05:54.016: INFO: Creating a PV followed by a PVC Jun 18 00:05:54.022: INFO: Creating a PV followed by a PVC Jun 18 00:05:54.028: INFO: Creating a PV followed by a PVC Jun 18 00:05:54.034: INFO: Creating a PV followed by a PVC Jun 18 00:06:04.085: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Jun 18 00:06:06.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0b9f3161-1265-4a32-a064-311e9b54f669] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:06.105: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:06:06.200: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4b63cb04-1126-43bc-bed4-256814760ef7] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:06.200: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:06:06.285: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3f5a34eb-807f-489f-b041-13db056acb15] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:06.285: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:06:06.364: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a99be3f9-59b6-498c-aaad-b4139fa6f882] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:06.364: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:06:06.444: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-85b66288-3712-4a7e-afbe-14c4f94ada32] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:06.445: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:06:06.539: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b1a0e8bd-f5c7-4c19-9087-56c5244e0bae] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:06.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:06:06.633: INFO: Creating a PV followed by a PVC Jun 18 00:06:06.641: INFO: Creating a PV followed by a PVC Jun 18 00:06:06.647: INFO: Creating a PV followed by a PVC Jun 18 00:06:06.653: INFO: Creating a PV followed by a PVC Jun 18 00:06:06.659: INFO: Creating a PV followed by a PVC Jun 18 00:06:06.664: INFO: Creating a PV followed by a PVC Jun 18 00:06:16.713: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 STEP: Creating a StatefulSet with pod affinity on nodes Jun 18 00:06:16.720: INFO: Found 0 stateful pods, waiting for 3 Jun 18 00:06:26.730: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Jun 18 00:06:26.730: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Jun 18 00:06:26.730: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Jun 18 00:06:26.734: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Jun 18 00:06:26.737: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (2.847776ms) Jun 18 00:06:26.737: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Jun 18 00:06:26.739: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (2.084861ms) Jun 18 00:06:26.739: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Jun 18 00:06:26.741: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (2.443613ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Jun 18 00:06:26.741: INFO: Deleting PersistentVolumeClaim "pvc-dchj7" Jun 18 00:06:26.747: INFO: Deleting PersistentVolume "local-pvjgf4p" STEP: Cleaning up PVC and PV Jun 18 00:06:26.752: INFO: Deleting PersistentVolumeClaim "pvc-dgzjs" Jun 18 00:06:26.756: INFO: Deleting PersistentVolume "local-pvk2zcw" STEP: Cleaning up PVC and PV Jun 18 00:06:26.759: INFO: Deleting PersistentVolumeClaim "pvc-j6rt6" Jun 18 00:06:26.763: INFO: Deleting PersistentVolume "local-pvkjs64" STEP: Cleaning up PVC and PV Jun 18 00:06:26.767: INFO: Deleting PersistentVolumeClaim "pvc-lqhnv" Jun 18 00:06:26.770: INFO: Deleting PersistentVolume "local-pv52lbd" STEP: Cleaning up PVC and PV Jun 18 00:06:26.774: INFO: Deleting PersistentVolumeClaim "pvc-x5gtl" Jun 18 00:06:26.777: INFO: Deleting PersistentVolume "local-pvj4mpc" STEP: Cleaning up PVC and PV Jun 18 00:06:26.781: INFO: Deleting PersistentVolumeClaim "pvc-7hqpm" Jun 18 00:06:26.784: INFO: Deleting PersistentVolume "local-pvj7mnc" STEP: Removing the test directory Jun 18 00:06:26.790: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a854fbf8-4fe4-47ce-925f-1dde6370b5be] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:26.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:26.896: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e0b39e71-d47a-4f82-95bd-8fdea33867e9] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:26.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:26.984: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ca34740d-8f9d-4c49-8dec-2445316baba0] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:26.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:27.066: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c03e4f9b-a0c7-4f46-8a6b-5341722bc3a1] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:27.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:27.160: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-29698d14-8d19-4b6c-ab3b-7c6b7bd59a81] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:27.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:27.253: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9246bd76-4532-4ad7-8cdc-d9ca9179151f] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node1-k6f2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:27.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Jun 18 00:06:27.358: INFO: Deleting PersistentVolumeClaim "pvc-58t22" Jun 18 00:06:27.362: INFO: Deleting PersistentVolume "local-pvbmdrc" STEP: Cleaning up PVC and PV Jun 18 00:06:27.367: INFO: Deleting PersistentVolumeClaim "pvc-6rnxq" Jun 18 00:06:27.370: INFO: Deleting PersistentVolume "local-pvfc6rt" STEP: Cleaning up PVC and PV Jun 18 00:06:27.375: INFO: Deleting PersistentVolumeClaim "pvc-wkkvv" Jun 18 00:06:27.378: INFO: Deleting PersistentVolume "local-pvvncws" STEP: Cleaning up PVC and PV Jun 18 00:06:27.382: INFO: Deleting PersistentVolumeClaim "pvc-m5jb4" Jun 18 00:06:27.386: INFO: Deleting PersistentVolume "local-pvjs5nn" STEP: Cleaning up PVC and PV Jun 18 00:06:27.390: INFO: Deleting PersistentVolumeClaim "pvc-pmtbl" Jun 18 00:06:27.393: INFO: Deleting PersistentVolume "local-pvzs99v" STEP: Cleaning up PVC and PV Jun 18 00:06:27.397: INFO: Deleting 
PersistentVolumeClaim "pvc-th584" Jun 18 00:06:27.401: INFO: Deleting PersistentVolume "local-pvznc9h" STEP: Removing the test directory Jun 18 00:06:27.404: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0b9f3161-1265-4a32-a064-311e9b54f669] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:27.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:28.050: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4b63cb04-1126-43bc-bed4-256814760ef7] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:28.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:28.441: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3f5a34eb-807f-489f-b041-13db056acb15] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:28.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:28.592: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a99be3f9-59b6-498c-aaad-b4139fa6f882] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:28.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:28.907: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-85b66288-3712-4a7e-afbe-14c4f94ada32] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:28.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:06:29.040: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b1a0e8bd-f5c7-4c19-9087-56c5244e0bae] Namespace:persistent-local-volumes-test-2614 PodName:hostexec-node2-8jnm6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:06:29.040: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:06:29.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2614" for this suite. 
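Reader's note: each "Creating a PV followed by a PVC" step above pairs a local PersistentVolume, pinned to one node via volume node affinity, with a claim that the StatefulSet later binds, which is how the spec keeps all volumes on a single node. A sketch of the two objects is below; the path, node name, storage class, and sizes are illustrative stand-ins for the randomized values the suite generates.

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Illustrative names; the suite appends random suffixes to all of these.
const (
	nodeName     = "node1"
	localPath    = "/tmp/local-volume-test-example"
	storageClass = "local-storage"
)

// localPV exposes a directory on one node as a PersistentVolume; the node
// affinity forces any pod using it onto that node.
func localPV() *v1.PersistentVolume {
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
		Spec: v1.PersistentVolumeSpec{
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				Local: &v1.LocalVolumeSource{Path: localPath},
			},
			StorageClassName: storageClass,
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}

// matchingPVC requests storage from the same class, so the binder pairs it
// with one of the pre-created local PVs instead of a dynamic provisioner.
func matchingPVC() *v1.PersistentVolumeClaim {
	sc := storageClass
	return &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &sc,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}
}

func main() { _, _ = localPV(), matchingPVC() }
```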
• [SLOW TEST:39.838 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod management is parallel and pod has affinity","total":-1,"completed":9,"skipped":272,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:35.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 STEP: Creating secret with name s-test-opt-create-1183f4f2-1c3d-4ee6-9edd-ed7de0193adf STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:06:35.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3812" for this suite. 
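Reader's note: the Projected secret spec above (and the Projected configMap one that follows) mounts a projected volume whose item points at a key the secret does not contain, with optional set to false, and then expects the pod never to start; the roughly 300-second runtime appears to be the framework waiting out that failure. A sketch of the volume shape is below; every name in it is a placeholder.

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonOptionalProjectedPod references a key that is absent from the secret.
// Because Optional is false, the kubelet cannot populate the volume, so the
// pod stays stuck in ContainerCreating instead of running.
func nonOptionalProjectedPod() *v1.Pod {
	optional := false
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []v1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected"}},
			}},
			Volumes: []v1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							Secret: &v1.SecretProjection{
								LocalObjectReference: v1.LocalObjectReference{Name: "s-test-opt-create"},
								// "missing" is not a key in the secret, and the
								// projection is not optional, so the mount fails.
								Items:    []v1.KeyToPath{{Key: "missing", Path: "creds"}},
								Optional: &optional,
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = nonOptionalProjectedPod() }
```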
• [SLOW TEST:300.058 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":2,"skipped":65,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:01.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469 STEP: Creating configMap with name cm-test-opt-create-88d79e76-6980-4d88-b8f8-e75e2db64474 STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:01.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2275" for this suite. • [SLOW TEST:300.060 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":3,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:01.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627 Jun 18 00:07:01.995: INFO: Found ClusterRoles; assuming RBAC is enabled. 
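Reader's note: "Found ClusterRoles; assuming RBAC is enabled" is the suite probing the rbac.authorization.k8s.io API before granting the external provisioner extra permissions. A minimal version of that probe with client-go is shown here; treating a non-empty list as "RBAC is on" is the same heuristic the message describes, and everything else is a placeholder.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rbacEnabled lists at most one ClusterRole; if the API group answers and
// returns anything, RBAC objects exist and the caller assumes RBAC is active.
func rbacEnabled(cs kubernetes.Interface) (bool, error) {
	list, err := cs.RbacV1().ClusterRoles().List(context.TODO(), metav1.ListOptions{Limit: 1})
	if err != nil {
		return false, err
	}
	return len(list.Items) > 0, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	on, err := rbacEnabled(kubernetes.NewForConfigOrDie(cfg))
	if err != nil {
		panic(err)
	}
	fmt.Println("RBAC enabled:", on)
}
```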
STEP: creating an external dynamic provisioner pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass Jun 18 00:07:16.135: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: creating a claim with a external provisioning annotation STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-8793 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1572864000 0} {} 1500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-8793-externalr2xvp,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Jun 18 00:07:16.141: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mlq9c] to have phase Bound Jun 18 00:07:16.144: INFO: PersistentVolumeClaim pvc-mlq9c found but phase is Pending instead of Bound. Jun 18 00:07:18.149: INFO: PersistentVolumeClaim pvc-mlq9c found but phase is Pending instead of Bound. Jun 18 00:07:20.153: INFO: PersistentVolumeClaim pvc-mlq9c found and phase=Bound (4.011937124s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-8793"/"pvc-mlq9c" STEP: deleting the claim's PV "pvc-03c8d3f9-e098-46b1-92be-5267d246ac37" Jun 18 00:07:20.162: INFO: Waiting up to 20m0s for PersistentVolume pvc-03c8d3f9-e098-46b1-92be-5267d246ac37 to get deleted Jun 18 00:07:20.164: INFO: PersistentVolume pvc-03c8d3f9-e098-46b1-92be-5267d246ac37 found and phase=Bound (1.998426ms) Jun 18 00:07:25.167: INFO: PersistentVolume pvc-03c8d3f9-e098-46b1-92be-5267d246ac37 was removed Jun 18 00:07:25.167: INFO: deleting claim "volume-provisioning-8793"/"pvc-mlq9c" Jun 18 00:07:25.170: INFO: deleting storage class volume-provisioning-8793-externalr2xvp STEP: Deleting pod external-provisioner-4zqtx in namespace volume-provisioning-8793 [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:25.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-8793" for this suite. 
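Reader's note: the external-provisioner spec above creates a StorageClass that names a provisioner outside the in-tree set, binds a claim to that class, and then checks that the resulting PV is deleted once the claim is gone. The sketch below shows the two objects; the provisioner string is invented for illustration and does not correspond to whatever image the suite actually deployed as external-provisioner-4zqtx, while the 1500Mi request matches the claim printed in the log.

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// externalClass names a provisioner that kube-controller-manager does not know,
// so volumes for it are created only by an external provisioner pod watching claims.
func externalClass() *storagev1.StorageClass {
	reclaim := v1.PersistentVolumeReclaimDelete
	return &storagev1.StorageClass{
		ObjectMeta:    metav1.ObjectMeta{GenerateName: "volume-provisioning-external"},
		Provisioner:   "example.com/test-provisioner", // placeholder provisioner name
		ReclaimPolicy: &reclaim,
	}
}

// externalClaim requests storage from that class; once the external provisioner
// creates and binds a PV, deleting the claim should delete the PV again.
func externalClaim(className string) *v1.PersistentVolumeClaim {
	return &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &className,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1500Mi")},
			},
		},
	}
}

func main() {
	sc := externalClass()
	_ = externalClaim(sc.Name) // with GenerateName, the real class name is known only after creation
}
```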
• [SLOW TEST:23.224 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner External /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:626 should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]","total":-1,"completed":4,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:06:29.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-5577 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:06:29.315: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-attacher Jun 18 00:06:29.318: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5577 Jun 18 00:06:29.318: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5577 Jun 18 00:06:29.320: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5577 Jun 18 00:06:29.323: INFO: creating *v1.Role: csi-mock-volumes-5577-2688/external-attacher-cfg-csi-mock-volumes-5577 Jun 18 00:06:29.325: INFO: creating *v1.RoleBinding: csi-mock-volumes-5577-2688/csi-attacher-role-cfg Jun 18 00:06:29.328: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-provisioner Jun 18 00:06:29.330: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5577 Jun 18 00:06:29.330: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5577 Jun 18 00:06:29.332: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5577 Jun 18 00:06:29.335: INFO: creating *v1.Role: csi-mock-volumes-5577-2688/external-provisioner-cfg-csi-mock-volumes-5577 Jun 18 00:06:29.337: INFO: creating *v1.RoleBinding: csi-mock-volumes-5577-2688/csi-provisioner-role-cfg Jun 18 00:06:29.340: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-resizer Jun 18 00:06:29.342: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5577 Jun 18 00:06:29.342: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5577 Jun 18 00:06:29.345: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5577 Jun 18 00:06:29.347: INFO: creating *v1.Role: csi-mock-volumes-5577-2688/external-resizer-cfg-csi-mock-volumes-5577 Jun 18 00:06:29.350: INFO: creating *v1.RoleBinding: csi-mock-volumes-5577-2688/csi-resizer-role-cfg Jun 18 00:06:29.353: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-snapshotter Jun 18 00:06:29.356: 
INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5577 Jun 18 00:06:29.356: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5577 Jun 18 00:06:29.358: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5577 Jun 18 00:06:29.361: INFO: creating *v1.Role: csi-mock-volumes-5577-2688/external-snapshotter-leaderelection-csi-mock-volumes-5577 Jun 18 00:06:29.363: INFO: creating *v1.RoleBinding: csi-mock-volumes-5577-2688/external-snapshotter-leaderelection Jun 18 00:06:29.367: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-mock Jun 18 00:06:29.370: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5577 Jun 18 00:06:29.374: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5577 Jun 18 00:06:29.376: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5577 Jun 18 00:06:29.379: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5577 Jun 18 00:06:29.382: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5577 Jun 18 00:06:29.384: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5577 Jun 18 00:06:29.387: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5577 Jun 18 00:06:29.389: INFO: creating *v1.StatefulSet: csi-mock-volumes-5577-2688/csi-mockplugin Jun 18 00:06:29.393: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5577 Jun 18 00:06:29.397: INFO: creating *v1.StatefulSet: csi-mock-volumes-5577-2688/csi-mockplugin-attacher Jun 18 00:06:29.401: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5577" Jun 18 00:06:29.403: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5577 to register on node node1 STEP: Creating pod STEP: checking for CSIInlineVolumes feature Jun 18 00:06:40.454: INFO: Pod inline-volume-kkx54 has the following logs: Jun 18 00:06:40.457: INFO: Deleting pod "inline-volume-kkx54" in namespace "csi-mock-volumes-5577" Jun 18 00:06:40.460: INFO: Wait up to 5m0s for pod "inline-volume-kkx54" to be fully deleted STEP: Deleting the previously created pod Jun 18 00:06:50.466: INFO: Deleting pod "pvc-volume-tester-td5t7" in namespace "csi-mock-volumes-5577" Jun 18 00:06:50.471: INFO: Wait up to 5m0s for pod "pvc-volume-tester-td5t7" to be fully deleted STEP: Checking CSI driver logs Jun 18 00:07:00.491: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-5577 Jun 18 00:07:00.491: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 9b06397d-ed1b-49f7-9fa5-07c2eee2ed48 Jun 18 00:07:00.491: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Jun 18 00:07:00.491: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true Jun 18 00:07:00.491: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-td5t7 Jun 18 00:07:00.491: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-13eb44e21fca6fdf8b40522457ad28ed8f032e95b944b5edc2dd211788bfe336","target_path":"/var/lib/kubelet/pods/9b06397d-ed1b-49f7-9fa5-07c2eee2ed48/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-td5t7 Jun 18 00:07:00.491: INFO: Deleting pod 
"pvc-volume-tester-td5t7" in namespace "csi-mock-volumes-5577" STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5577 STEP: Waiting for namespaces [csi-mock-volumes-5577] to vanish STEP: uninstalling csi mock driver Jun 18 00:07:06.506: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-attacher Jun 18 00:07:06.512: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5577 Jun 18 00:07:06.516: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5577 Jun 18 00:07:06.519: INFO: deleting *v1.Role: csi-mock-volumes-5577-2688/external-attacher-cfg-csi-mock-volumes-5577 Jun 18 00:07:06.523: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5577-2688/csi-attacher-role-cfg Jun 18 00:07:06.526: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-provisioner Jun 18 00:07:06.530: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5577 Jun 18 00:07:06.533: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5577 Jun 18 00:07:06.536: INFO: deleting *v1.Role: csi-mock-volumes-5577-2688/external-provisioner-cfg-csi-mock-volumes-5577 Jun 18 00:07:06.539: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5577-2688/csi-provisioner-role-cfg Jun 18 00:07:06.543: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-resizer Jun 18 00:07:06.546: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5577 Jun 18 00:07:06.550: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5577 Jun 18 00:07:06.553: INFO: deleting *v1.Role: csi-mock-volumes-5577-2688/external-resizer-cfg-csi-mock-volumes-5577 Jun 18 00:07:06.557: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5577-2688/csi-resizer-role-cfg Jun 18 00:07:06.560: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-snapshotter Jun 18 00:07:06.564: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5577 Jun 18 00:07:06.567: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5577 Jun 18 00:07:06.570: INFO: deleting *v1.Role: csi-mock-volumes-5577-2688/external-snapshotter-leaderelection-csi-mock-volumes-5577 Jun 18 00:07:06.573: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5577-2688/external-snapshotter-leaderelection Jun 18 00:07:06.576: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5577-2688/csi-mock Jun 18 00:07:06.579: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5577 Jun 18 00:07:06.582: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5577 Jun 18 00:07:06.585: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5577 Jun 18 00:07:06.589: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5577 Jun 18 00:07:06.592: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5577 Jun 18 00:07:06.596: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5577 Jun 18 00:07:06.599: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5577 Jun 18 00:07:06.603: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5577-2688/csi-mockplugin Jun 18 00:07:06.607: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5577 Jun 18 00:07:06.611: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5577-2688/csi-mockplugin-attacher STEP: deleting the driver namespace: 
csi-mock-volumes-5577-2688 STEP: Waiting for namespaces [csi-mock-volumes-5577-2688] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:34.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:65.378 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":10,"skipped":274,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:34.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 STEP: Creating configMap with name projected-configmap-test-volume-map-714f293d-1187-4712-8a60-5f315d5ed93e STEP: Creating a pod to test consume configMaps Jun 18 00:07:34.718: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cad81cbe-3d9d-4960-b000-7bf20635c3f9" in namespace "projected-7673" to be "Succeeded or Failed" Jun 18 00:07:34.720: INFO: Pod "pod-projected-configmaps-cad81cbe-3d9d-4960-b000-7bf20635c3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.967187ms Jun 18 00:07:36.724: INFO: Pod "pod-projected-configmaps-cad81cbe-3d9d-4960-b000-7bf20635c3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006122742s Jun 18 00:07:38.729: INFO: Pod "pod-projected-configmaps-cad81cbe-3d9d-4960-b000-7bf20635c3f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01119697s STEP: Saw pod success Jun 18 00:07:38.729: INFO: Pod "pod-projected-configmaps-cad81cbe-3d9d-4960-b000-7bf20635c3f9" satisfied condition "Succeeded or Failed" Jun 18 00:07:38.732: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-cad81cbe-3d9d-4960-b000-7bf20635c3f9 container agnhost-container: STEP: delete the pod Jun 18 00:07:38.746: INFO: Waiting for pod pod-projected-configmaps-cad81cbe-3d9d-4960-b000-7bf20635c3f9 to disappear Jun 18 00:07:38.748: INFO: Pod pod-projected-configmaps-cad81cbe-3d9d-4960-b000-7bf20635c3f9 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:38.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7673" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":295,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:06:10.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648 STEP: Clean PV local-pv9px46 [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:48.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5496" for this suite. 
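The "Pods sharing a single local PV [Serial]" block above creates 50 pods against one claim and then waits for all of them to be running. A rough client-go sketch of that pattern, assuming an already-bound claim; the image, pod names, and the unfiltered pod listing are simplifications rather than the suite's implementation:

// Sketch only: N pods mounting one PVC, then wait until all report Running.
package sketches

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// runPodsSharingClaim creates `count` pods that all mount the same PVC and
// waits until at least that many pods in the namespace report phase Running.
func runPodsSharingClaim(cs kubernetes.Interface, ns, claimName string, count int) error {
	for i := 0; i < count; i++ {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("shared-pv-pod-%d", i)},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busy",
					Image:   "busybox", // illustrative image
					Command: []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "shared",
						MountPath: "/mnt/volume1",
					}},
				}},
				Volumes: []corev1.Volume{{
					Name: "shared",
					VolumeSource: corev1.VolumeSource{
						PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
							ClaimName: claimName,
						},
					},
				}},
			},
		}
		if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	// Simplification: count every Running pod in the namespace instead of
	// tracking the created pods individually.
	return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		return running >= count, nil
	})
}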
• [SLOW TEST:97.559 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629 all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":-1,"completed":13,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:06:35.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:07:49.916: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-453e45b5-a536-4f9a-9d7a-e8040551041d-backend && mount --bind /tmp/local-volume-test-453e45b5-a536-4f9a-9d7a-e8040551041d-backend /tmp/local-volume-test-453e45b5-a536-4f9a-9d7a-e8040551041d-backend && ln -s /tmp/local-volume-test-453e45b5-a536-4f9a-9d7a-e8040551041d-backend /tmp/local-volume-test-453e45b5-a536-4f9a-9d7a-e8040551041d] Namespace:persistent-local-volumes-test-9252 PodName:hostexec-node2-797rf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:07:49.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:07:50.042: INFO: Creating a PV followed by a PVC Jun 18 00:07:50.053: INFO: Waiting for PV local-pv4w782 to bind to PVC pvc-nq9pm Jun 18 00:07:50.053: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-nq9pm] to have phase Bound Jun 18 00:07:50.055: INFO: PersistentVolumeClaim pvc-nq9pm found but phase is Pending instead of Bound. 
Jun 18 00:07:52.060: INFO: PersistentVolumeClaim pvc-nq9pm found and phase=Bound (2.007117367s) Jun 18 00:07:52.060: INFO: Waiting up to 3m0s for PersistentVolume local-pv4w782 to have phase Bound Jun 18 00:07:52.063: INFO: PersistentVolume local-pv4w782 found and phase=Bound (3.007321ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 18 00:07:52.068: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:07:52.070: INFO: Deleting PersistentVolumeClaim "pvc-nq9pm" Jun 18 00:07:52.075: INFO: Deleting PersistentVolume "local-pv4w782" STEP: Removing the test directory Jun 18 00:07:52.079: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-453e45b5-a536-4f9a-9d7a-e8040551041d && umount /tmp/local-volume-test-453e45b5-a536-4f9a-9d7a-e8040551041d-backend && rm -r /tmp/local-volume-test-453e45b5-a536-4f9a-9d7a-e8040551041d-backend] Namespace:persistent-local-volumes-test-9252 PodName:hostexec-node2-797rf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:07:52.079: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:52.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9252" for this suite. 
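Each of these local-volume blocks runs a "Creating a PV followed by a PVC" step that builds a PersistentVolume pointing at a directory on one node and pins it there with required node affinity. A sketch of what such an object looks like; the capacity, StorageClass name, and reclaim policy here are illustrative choices, not values taken from the suite:

// Sketch only: a node-pinned local PersistentVolume (assumed values).
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV describes a PersistentVolume backed by a directory on one node,
// restricted to that node through required volume node affinity.
func localPV(name, nodeName, path string) *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage", // illustrative
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: path},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}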
S [SKIPPING] [76.390 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:55.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 STEP: Creating secret with name s-test-opt-create-16bc53d9-e73f-4ceb-8aef-2a9ba191c2a5 STEP: Creating the pod [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:55.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1278" for this suite. • [SLOW TEST:300.072 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 ------------------------------ {"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":6,"skipped":129,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:55.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Jun 18 00:07:55.736: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:55.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-5946" for this suite. 
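The Secrets spec above ("Should fail non-optional pod creation due to the key in the secret object does not exist") mounts a secret item whose key is missing while Optional is false, so the kubelet cannot populate the volume and the pod never starts, hence the roughly 300-second runtime recorded for it. A sketch of the volume shape involved, with illustrative names and image (this is not the suite's pod template):

// Sketch only: a pod whose non-optional secret key is absent never leaves ContainerCreating.
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonOptionalSecretPod mounts a single key from a secret with Optional=false.
// If secretName exists but keyName does not, the kubelet cannot build the
// volume and the pod stays in ContainerCreating instead of starting.
func nonOptionalSecretPod(name, secretName, keyName string) *corev1.Pod {
	optional := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "consumer",
				Image:   "busybox", // illustrative
				Command: []string{"cat", "/etc/secret-volume/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secretName,
						Optional:   &optional,
						Items: []corev1.KeyToPath{{
							Key:  keyName, // key that is not present in the secret
							Path: "data",
						}},
					},
				},
			}},
		},
	}
}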
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:77 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:40.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [It] should fail due to wrong node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324 STEP: Initializing test volumes Jun 18 00:02:48.761: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a53847e0-fa46-4ded-8fad-6a8ecee024f9] Namespace:persistent-local-volumes-test-5284 PodName:hostexec-node2-mbnws ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:02:48.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:02:48.854: INFO: Creating a PV followed by a PVC Jun 18 00:02:48.862: INFO: Waiting for PV local-pvw477n to bind to PVC pvc-qt2kn Jun 18 00:02:48.862: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-qt2kn] to have phase Bound Jun 18 00:02:48.865: INFO: PersistentVolumeClaim pvc-qt2kn found but phase is Pending instead of Bound. Jun 18 00:02:50.869: INFO: PersistentVolumeClaim pvc-qt2kn found but phase is Pending instead of Bound. Jun 18 00:02:52.873: INFO: PersistentVolumeClaim pvc-qt2kn found but phase is Pending instead of Bound. Jun 18 00:02:54.877: INFO: PersistentVolumeClaim pvc-qt2kn found but phase is Pending instead of Bound. 
Jun 18 00:02:56.882: INFO: PersistentVolumeClaim pvc-qt2kn found and phase=Bound (8.020861729s) Jun 18 00:02:56.883: INFO: Waiting up to 3m0s for PersistentVolume local-pvw477n to have phase Bound Jun 18 00:02:56.885: INFO: PersistentVolume local-pvw477n found and phase=Bound (2.62973ms) STEP: Cleaning up PVC and PV Jun 18 00:07:56.913: INFO: Deleting PersistentVolumeClaim "pvc-qt2kn" Jun 18 00:07:56.922: INFO: Deleting PersistentVolume "local-pvw477n" STEP: Removing the test directory Jun 18 00:07:56.927: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a53847e0-fa46-4ded-8fad-6a8ecee024f9] Namespace:persistent-local-volumes-test-5284 PodName:hostexec-node2-mbnws ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:07:56.927: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:57.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5284" for this suite. • [SLOW TEST:316.344 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Local volume that cannot be mounted [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304 should fail due to wrong node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to wrong node","total":-1,"completed":2,"skipped":75,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:02:57.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:57.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1140" for this suite. 
• [SLOW TEST:300.054 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":4,"skipped":155,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:57.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:07:57.545: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:07:57.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8525" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:55.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 18 00:07:55.796: INFO: Waiting up to 5m0s for pod "pod-ad9ef7d7-02ec-4492-95d2-cd3e31e27822" in namespace "emptydir-5185" to be "Succeeded or Failed" Jun 18 00:07:55.800: INFO: Pod "pod-ad9ef7d7-02ec-4492-95d2-cd3e31e27822": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.552255ms Jun 18 00:07:57.803: INFO: Pod "pod-ad9ef7d7-02ec-4492-95d2-cd3e31e27822": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007182302s Jun 18 00:07:59.809: INFO: Pod "pod-ad9ef7d7-02ec-4492-95d2-cd3e31e27822": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012582322s Jun 18 00:08:01.814: INFO: Pod "pod-ad9ef7d7-02ec-4492-95d2-cd3e31e27822": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017722539s STEP: Saw pod success Jun 18 00:08:01.814: INFO: Pod "pod-ad9ef7d7-02ec-4492-95d2-cd3e31e27822" satisfied condition "Succeeded or Failed" Jun 18 00:08:01.817: INFO: Trying to get logs from node node1 pod pod-ad9ef7d7-02ec-4492-95d2-cd3e31e27822 container test-container: STEP: delete the pod Jun 18 00:08:01.831: INFO: Waiting for pod pod-ad9ef7d7-02ec-4492-95d2-cd3e31e27822 to disappear Jun 18 00:08:01.833: INFO: Pod pod-ad9ef7d7-02ec-4492-95d2-cd3e31e27822 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:08:01.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5185" for this suite. • [SLOW TEST:6.076 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":7,"skipped":152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:52.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 18 00:07:52.329: INFO: The status of Pod test-hostpath-type-zxqw2 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:07:54.333: INFO: The status of Pod test-hostpath-type-zxqw2 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:07:56.334: INFO: The status of Pod test-hostpath-type-zxqw2 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:07:58.335: INFO: The status of Pod test-hostpath-type-zxqw2 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:08:00.337: INFO: The status of Pod test-hostpath-type-zxqw2 is Running (Ready = true) STEP: running on node node1 [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:08:04.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-5253" for this suite. • [SLOW TEST:12.083 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket","total":-1,"completed":3,"skipped":101,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:08:04.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] deletion should be idempotent /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557 Jun 18 00:08:04.405: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:08:04.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-7948" for this suite. 
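The HostPathType Socket spec above mounts an existing socket ('asocket') through a hostPath volume whose type is set to HostPathSocket, which makes the kubelet verify the path's type before starting the container; the Character Device spec later in this log drives the same check to a failure by pairing a character device with HostPathBlockDev. A sketch of the socket variant, with illustrative paths and image:

// Sketch only: a hostPath volume that must be a UNIX socket.
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathSocketPod mounts socketPath from the host and asks the kubelet to
// check that it really is a UNIX socket (HostPathSocket). Using
// HostPathBlockDev or HostPathCharDev for a non-matching path instead yields
// the type-check error event asserted by the other HostPathType specs.
func hostPathSocketPod(name, socketPath string) *corev1.Pod {
	hostPathType := corev1.HostPathSocket
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "socket-user",
				Image:   "busybox", // illustrative
				Command: []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "host-socket",
					MountPath: "/mnt/test-socket",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host-socket",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: socketPath,
						Type: &hostPathType,
					},
				},
			}},
		},
	}
}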
S [SKIPPING] [0.033 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 deletion should be idempotent [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:563 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:08:01.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 STEP: Creating a pod to test hostPath subPath Jun 18 00:08:01.945: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3991" to be "Succeeded or Failed" Jun 18 00:08:01.947: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1.788422ms Jun 18 00:08:03.950: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005058114s Jun 18 00:08:05.955: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009744696s STEP: Saw pod success Jun 18 00:08:05.955: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 18 00:08:05.957: INFO: Trying to get logs from node node1 pod pod-host-path-test container test-container-2: STEP: delete the pod Jun 18 00:08:05.971: INFO: Waiting for pod pod-host-path-test to disappear Jun 18 00:08:05.973: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:08:05.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3991" for this suite. 
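The HostPath "should support subPath" spec above exercises a container reading through a subPath of a hostPath volume. The sketch below is an illustrative two-container variant of that idea, not the suite's exact pod: the writer creates a file under a subdirectory of the volume, and the reader sees it through a volumeMount whose SubPath points at that subdirectory.

// Sketch only: the same hostPath volume mounted at its root and via a subPath.
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subPathPod mounts one hostPath volume twice: the writer sees its root at
// /data, the reader sees only the subdirectory `sub` at /subpath.
func subPathPod(name, hostDir, sub string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{
					Name:    "writer",
					Image:   "busybox", // illustrative
					Command: []string{"sh", "-c", "mkdir -p /data/" + sub + " && echo hello > /data/" + sub + "/file && sleep 10"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "host-vol",
						MountPath: "/data",
					}},
				},
				{
					Name:    "reader",
					Image:   "busybox", // illustrative
					Command: []string{"sh", "-c", "sleep 5 && cat /subpath/file"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "host-vol",
						MountPath: "/subpath",
						SubPath:   sub,
					}},
				},
			},
			Volumes: []corev1.Volume{{
				Name: "host-vol",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: hostDir},
				},
			}},
		},
	}
}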
• ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":8,"skipped":181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:08:04.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 18 00:08:04.471: INFO: The status of Pod test-hostpath-type-ljdzh is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:08:06.475: INFO: The status of Pod test-hostpath-type-ljdzh is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:08:08.479: INFO: The status of Pod test-hostpath-type-ljdzh is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Jun 18 00:08:08.481: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-4863 PodName:test-hostpath-type-ljdzh ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:08:08.481: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:08:10.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-4863" for this suite. 
• [SLOW TEST:6.183 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev","total":-1,"completed":4,"skipped":107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:25.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:07:55.312: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-686e9c0b-0e2c-460a-8880-dc411494da48-backend && ln -s /tmp/local-volume-test-686e9c0b-0e2c-460a-8880-dc411494da48-backend /tmp/local-volume-test-686e9c0b-0e2c-460a-8880-dc411494da48] Namespace:persistent-local-volumes-test-287 PodName:hostexec-node2-8p8fm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:07:55.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:07:55.488: INFO: Creating a PV followed by a PVC Jun 18 00:07:55.498: INFO: Waiting for PV local-pv52tc8 to bind to PVC pvc-r8qt4 Jun 18 00:07:55.498: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-r8qt4] to have phase Bound Jun 18 00:07:55.500: INFO: PersistentVolumeClaim pvc-r8qt4 found but phase is Pending instead of Bound. 
Jun 18 00:07:57.504: INFO: PersistentVolumeClaim pvc-r8qt4 found and phase=Bound (2.005481566s) Jun 18 00:07:57.504: INFO: Waiting up to 3m0s for PersistentVolume local-pv52tc8 to have phase Bound Jun 18 00:07:57.506: INFO: PersistentVolume local-pv52tc8 found and phase=Bound (2.094344ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 18 00:08:51.530: INFO: pod "pod-8523c93e-c0ae-4f09-bd01-1e52d9109972" created on Node "node2" STEP: Writing in pod1 Jun 18 00:08:51.530: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-287 PodName:pod-8523c93e-c0ae-4f09-bd01-1e52d9109972 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:08:51.530: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:08:51.618: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:08:51.618: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-287 PodName:pod-8523c93e-c0ae-4f09-bd01-1e52d9109972 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:08:51.618: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:08:51.779: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 18 00:09:09.806: INFO: pod "pod-66da2156-1e45-41c6-b21d-5c28ee902f29" created on Node "node2" Jun 18 00:09:09.806: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-287 PodName:pod-66da2156-1e45-41c6-b21d-5c28ee902f29 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:09.806: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:09.908: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 18 00:09:09.908: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-686e9c0b-0e2c-460a-8880-dc411494da48 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-287 PodName:pod-66da2156-1e45-41c6-b21d-5c28ee902f29 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:09.908: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:09.994: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-686e9c0b-0e2c-460a-8880-dc411494da48 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 18 00:09:09.994: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-287 PodName:pod-8523c93e-c0ae-4f09-bd01-1e52d9109972 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:09.994: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:10.086: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-686e9c0b-0e2c-460a-8880-dc411494da48", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-8523c93e-c0ae-4f09-bd01-1e52d9109972 in namespace persistent-local-volumes-test-287 STEP: Deleting pod2 STEP: Deleting pod pod-66da2156-1e45-41c6-b21d-5c28ee902f29 in namespace persistent-local-volumes-test-287 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:09:10.097: INFO: Deleting PersistentVolumeClaim "pvc-r8qt4" Jun 18 00:09:10.101: INFO: Deleting PersistentVolume "local-pv52tc8" STEP: Removing the test directory Jun 18 00:09:10.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-686e9c0b-0e2c-460a-8880-dc411494da48 && rm -r /tmp/local-volume-test-686e9c0b-0e2c-460a-8880-dc411494da48-backend] Namespace:persistent-local-volumes-test-287 PodName:hostexec-node2-8p8fm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:10.105: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:10.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-287" for this suite. • [SLOW TEST:104.956 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":256,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:06:02.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 STEP: Create configmap STEP: Creating pod pod-subpath-test-configmap-75z4 STEP: Failing liveness probe Jun 18 00:06:08.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=subpath-7564 exec pod-subpath-test-configmap-75z4 --container test-container-volume-configmap-75z4 -- /bin/sh -c rm /probe-volume/probe-file' Jun 18 00:06:08.550: INFO: stderr: "" Jun 18 00:06:08.550: INFO: stdout: "" Jun 18 00:06:08.550: INFO: Pod exec output: STEP: Waiting for container to restart Jun 18 00:06:08.553: INFO: Container test-container-subpath-configmap-75z4, restarts: 0 Jun 18 
00:06:18.558: INFO: Container test-container-subpath-configmap-75z4, restarts: 0 Jun 18 00:06:28.558: INFO: Container test-container-subpath-configmap-75z4, restarts: 2 Jun 18 00:06:28.558: INFO: Container has restart count: 2 STEP: Fix liveness probe STEP: Waiting for container to stop restarting Jun 18 00:07:20.569: INFO: Container has restart count: 3 Jun 18 00:07:44.569: INFO: Container has restart count: 4 Jun 18 00:08:46.570: INFO: Container restart has stabilized Jun 18 00:08:46.571: INFO: Deleting pod "pod-subpath-test-configmap-75z4" in namespace "subpath-7564" Jun 18 00:08:46.578: INFO: Wait up to 5m0s for pod "pod-subpath-test-configmap-75z4" to be fully deleted [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:10.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7564" for this suite. • [SLOW TEST:188.364 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Container restart /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130 should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":12,"skipped":331,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:10.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:09:10.641: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:10.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4427" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:10.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename flexvolume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:169 Jun 18 00:09:10.792: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:10.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "flexvolume-1811" for this suite. 
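The Subpath "container can restart successfully after configmaps modified" spec further above breaks a liveness probe, watches the container's restart count climb, then fixes the probe and waits for the count to stabilize. A sketch of the restart-count polling half of that in plain client-go; the helper name, interval, and timeout are illustrative, not the suite's:

// Sketch only: poll a container's restart count, as the "Container ... restarts: N" lines do.
package sketches

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRestarts polls a pod until the named container has restarted at
// least `min` times, or the timeout expires.
func waitForRestarts(cs kubernetes.Interface, ns, podName, container string, min int32) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, st := range pod.Status.ContainerStatuses {
			if st.Name == container {
				return st.RestartCount >= min, nil
			}
		}
		return false, nil
	})
}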
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be mountable when non-attachable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:173 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:08:06.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-cdcde0a1-b1c3-49b3-a761-616b53e52561" Jun 18 00:08:48.091: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cdcde0a1-b1c3-49b3-a761-616b53e52561 && dd if=/dev/zero of=/tmp/local-volume-test-cdcde0a1-b1c3-49b3-a761-616b53e52561/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-cdcde0a1-b1c3-49b3-a761-616b53e52561/file] Namespace:persistent-local-volumes-test-4161 PodName:hostexec-node2-fm7vm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:08:48.091: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:08:48.276: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cdcde0a1-b1c3-49b3-a761-616b53e52561/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4161 PodName:hostexec-node2-fm7vm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:08:48.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:08:48.385: INFO: Creating a PV followed by a PVC Jun 18 00:08:48.396: INFO: Waiting for PV local-pv5j8s6 to bind to PVC pvc-vwqd6 Jun 18 00:08:48.396: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vwqd6] to have phase Bound Jun 18 00:08:48.398: INFO: PersistentVolumeClaim pvc-vwqd6 found but phase is Pending instead of Bound. 
Jun 18 00:08:50.403: INFO: PersistentVolumeClaim pvc-vwqd6 found and phase=Bound (2.007714045s) Jun 18 00:08:50.403: INFO: Waiting up to 3m0s for PersistentVolume local-pv5j8s6 to have phase Bound Jun 18 00:08:50.406: INFO: PersistentVolume local-pv5j8s6 found and phase=Bound (2.673909ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 18 00:09:10.432: INFO: pod "pod-90f21bb0-f6a7-4799-95c7-899614acaa93" created on Node "node2" STEP: Writing in pod1 Jun 18 00:09:10.432: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4161 PodName:pod-90f21bb0-f6a7-4799-95c7-899614acaa93 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:10.432: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:10.561: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000149 seconds, 118.0KB/s", err: Jun 18 00:09:10.561: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-4161 PodName:pod-90f21bb0-f6a7-4799-95c7-899614acaa93 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:10.561: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:10.733: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 18 00:09:22.756: INFO: pod "pod-61f4615f-0ec2-49e6-a50c-2f935b9831f7" created on Node "node2" Jun 18 00:09:22.756: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-4161 PodName:pod-61f4615f-0ec2-49e6-a50c-2f935b9831f7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:22.756: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:22.851: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod2 Jun 18 00:09:22.851: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4161 PodName:pod-61f4615f-0ec2-49e6-a50c-2f935b9831f7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:22.851: INFO: >>> kubeConfig: 
/root/.kube/config Jun 18 00:09:22.937: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000037 seconds, 290.3KB/s", err: STEP: Reading in pod1 Jun 18 00:09:22.937: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-4161 PodName:pod-90f21bb0-f6a7-4799-95c7-899614acaa93 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:22.937: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:23.028: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "/dev/loop0.ontent...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-90f21bb0-f6a7-4799-95c7-899614acaa93 in namespace persistent-local-volumes-test-4161 STEP: Deleting pod2 STEP: Deleting pod pod-61f4615f-0ec2-49e6-a50c-2f935b9831f7 in namespace persistent-local-volumes-test-4161 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:09:23.037: INFO: Deleting PersistentVolumeClaim "pvc-vwqd6" Jun 18 00:09:23.041: INFO: Deleting PersistentVolume "local-pv5j8s6" Jun 18 00:09:23.044: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-cdcde0a1-b1c3-49b3-a761-616b53e52561/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4161 PodName:hostexec-node2-fm7vm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:23.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-cdcde0a1-b1c3-49b3-a761-616b53e52561/file Jun 18 00:09:23.141: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4161 PodName:hostexec-node2-fm7vm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:23.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-cdcde0a1-b1c3-49b3-a761-616b53e52561 Jun 18 00:09:23.231: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cdcde0a1-b1c3-49b3-a761-616b53e52561] Namespace:persistent-local-volumes-test-4161 PodName:hostexec-node2-fm7vm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:23.231: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:23.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4161" for this suite. 
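The [Volume type: block] setup and teardown above are loop-device plumbing run on the node through the hostexec pod. Condensed from the ExecWithOptions entries into a standalone sketch (the directory is a placeholder; the suite generates a UUID-suffixed path):

    DIR=/tmp/local-volume-test-example   # placeholder; the suite uses a per-test UUID path
    # Setup: 20 MiB backing file (4096-byte blocks x 5120) attached to a free loop device
    mkdir -p "$DIR"
    dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
    losetup -f "$DIR/file"
    # Resolve which loop device was assigned to the backing file
    E2E_LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
    echo "$E2E_LOOP_DEV"
    # Teardown: detach the device and remove the backing file
    losetup -d "$E2E_LOOP_DEV"
    rm -r "$DIR"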
• [SLOW TEST:77.306 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":204,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:23.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Jun 18 00:09:23.376: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:23.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-4083" for this suite. 
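The write/read check in the two-pod block spec above also explains the odd-looking "/dev/loop0.ontent" output: each pod writes a short file onto the raw device with dd and reads it back with hexdump, so the second, shorter write only overwrites the first bytes of the first one. Roughly (simplified from the logged commands):

    # In pod1: stage a small file and copy it onto the raw block volume
    mkdir -p /tmp/mnt/volume1
    echo test-file-content > /tmp/mnt/volume1/test-file
    dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100
    # In pod2 (same PV): read the first 100 bytes back as printable characters
    hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1
    # Writing "/dev/loop0\n" (11 bytes) from pod2 afterwards only overwrites the start of
    # the earlier 18-byte write, so pod1 then reads "/dev/loop0.ontent..." (the newline
    # shows up as "." in hexdump's %_p output).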
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv4 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:78 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:48.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:08:58.512: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-92663087-471a-4dcd-99d8-c4745dd1376c && mount --bind /tmp/local-volume-test-92663087-471a-4dcd-99d8-c4745dd1376c /tmp/local-volume-test-92663087-471a-4dcd-99d8-c4745dd1376c] Namespace:persistent-local-volumes-test-1805 PodName:hostexec-node2-s779t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:08:58.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:08:58.604: INFO: Creating a PV followed by a PVC Jun 18 00:08:58.613: INFO: Waiting for PV local-pvptzcf to bind to PVC pvc-89vn5 Jun 18 00:08:58.613: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-89vn5] to have phase Bound Jun 18 00:08:58.616: INFO: PersistentVolumeClaim pvc-89vn5 found but phase is Pending instead of Bound. Jun 18 00:09:00.622: INFO: PersistentVolumeClaim pvc-89vn5 found but phase is Pending instead of Bound. Jun 18 00:09:02.625: INFO: PersistentVolumeClaim pvc-89vn5 found but phase is Pending instead of Bound. Jun 18 00:09:04.632: INFO: PersistentVolumeClaim pvc-89vn5 found but phase is Pending instead of Bound. Jun 18 00:09:06.637: INFO: PersistentVolumeClaim pvc-89vn5 found but phase is Pending instead of Bound. Jun 18 00:09:08.643: INFO: PersistentVolumeClaim pvc-89vn5 found but phase is Pending instead of Bound. Jun 18 00:09:10.646: INFO: PersistentVolumeClaim pvc-89vn5 found but phase is Pending instead of Bound. 
Jun 18 00:09:12.650: INFO: PersistentVolumeClaim pvc-89vn5 found and phase=Bound (14.036458511s) Jun 18 00:09:12.650: INFO: Waiting up to 3m0s for PersistentVolume local-pvptzcf to have phase Bound Jun 18 00:09:12.652: INFO: PersistentVolume local-pvptzcf found and phase=Bound (2.432389ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:09:24.681: INFO: pod "pod-b0a9359a-cec2-4315-9a72-c73bf841fe6d" created on Node "node2" STEP: Writing in pod1 Jun 18 00:09:24.681: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1805 PodName:pod-b0a9359a-cec2-4315-9a72-c73bf841fe6d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:24.681: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:24.792: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 18 00:09:24.792: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1805 PodName:pod-b0a9359a-cec2-4315-9a72-c73bf841fe6d ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:24.792: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:24.951: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-b0a9359a-cec2-4315-9a72-c73bf841fe6d in namespace persistent-local-volumes-test-1805 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:09:24.956: INFO: Deleting PersistentVolumeClaim "pvc-89vn5" Jun 18 00:09:24.960: INFO: Deleting PersistentVolume "local-pvptzcf" STEP: Removing the test directory Jun 18 00:09:24.965: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-92663087-471a-4dcd-99d8-c4745dd1376c && rm -r /tmp/local-volume-test-92663087-471a-4dcd-99d8-c4745dd1376c] Namespace:persistent-local-volumes-test-1805 PodName:hostexec-node2-s779t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:24.965: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:25.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1805" for this suite. 
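For the dir-bindmounted volume type above, the node-side volume is simply a directory bind-mounted onto itself, and teardown unmounts and removes it. As a condensed sketch with a placeholder path:

    DIR=/tmp/local-volume-test-example-bindmount   # placeholder path
    # Setup: bind-mount the directory onto itself so it appears as a distinct mount
    mkdir "$DIR"
    mount --bind "$DIR" "$DIR"
    # ...the local PV created by the test then points at $DIR...
    # Teardown
    umount "$DIR"
    rm -r "$DIR"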
• [SLOW TEST:96.659 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":14,"skipped":359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:10.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-e63ac00c-4261-495f-bfd7-d31eef0752b9" Jun 18 00:09:16.858: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e63ac00c-4261-495f-bfd7-d31eef0752b9 && dd if=/dev/zero of=/tmp/local-volume-test-e63ac00c-4261-495f-bfd7-d31eef0752b9/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e63ac00c-4261-495f-bfd7-d31eef0752b9/file] Namespace:persistent-local-volumes-test-7804 PodName:hostexec-node2-fkh5j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:16.858: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:17.018: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e63ac00c-4261-495f-bfd7-d31eef0752b9/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7804 PodName:hostexec-node2-fkh5j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:17.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:09:17.174: INFO: Creating a PV followed by a PVC Jun 18 00:09:17.181: INFO: Waiting for PV local-pvjz8xp to bind to PVC pvc-hbc55 Jun 18 00:09:17.181: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-hbc55] to have phase Bound Jun 18 00:09:17.183: INFO: PersistentVolumeClaim pvc-hbc55 found but phase is Pending instead of Bound. Jun 18 00:09:19.187: INFO: PersistentVolumeClaim pvc-hbc55 found but phase is Pending instead of Bound. 
Jun 18 00:09:21.196: INFO: PersistentVolumeClaim pvc-hbc55 found but phase is Pending instead of Bound. Jun 18 00:09:23.200: INFO: PersistentVolumeClaim pvc-hbc55 found but phase is Pending instead of Bound. Jun 18 00:09:25.204: INFO: PersistentVolumeClaim pvc-hbc55 found but phase is Pending instead of Bound. Jun 18 00:09:27.207: INFO: PersistentVolumeClaim pvc-hbc55 found and phase=Bound (10.0252253s) Jun 18 00:09:27.207: INFO: Waiting up to 3m0s for PersistentVolume local-pvjz8xp to have phase Bound Jun 18 00:09:27.209: INFO: PersistentVolume local-pvjz8xp found and phase=Bound (1.867882ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Jun 18 00:09:27.213: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:09:27.214: INFO: Deleting PersistentVolumeClaim "pvc-hbc55" Jun 18 00:09:27.219: INFO: Deleting PersistentVolume "local-pvjz8xp" Jun 18 00:09:27.222: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e63ac00c-4261-495f-bfd7-d31eef0752b9/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7804 PodName:hostexec-node2-fkh5j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:27.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop3" on node "node2" at path /tmp/local-volume-test-e63ac00c-4261-495f-bfd7-d31eef0752b9/file Jun 18 00:09:27.310: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop3] Namespace:persistent-local-volumes-test-7804 PodName:hostexec-node2-fkh5j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:27.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-e63ac00c-4261-495f-bfd7-d31eef0752b9 Jun 18 00:09:27.399: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e63ac00c-4261-495f-bfd7-d31eef0752b9] Namespace:persistent-local-volumes-test-7804 PodName:hostexec-node2-fkh5j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:27.399: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:27.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7804" for this suite. 
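A recurring pattern in these specs is the nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ... prefix: the hostexec pod has the node's root filesystem mounted at /rootfs and enters the mount namespace of the node's PID 1, so the wrapped command effectively runs on the host. The loop-device teardown above, for example, amounts to:

    # Run a command in the node's mount namespace from inside the hostexec pod
    # (assumes the pod is privileged and has the host root mounted at /rootfs).
    nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 'losetup -d /dev/loop3'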
S [SKIPPING] in Spec Setup (BeforeEach) [16.692 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 We don't set fsGroup on block device, skipped. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:25.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 STEP: Creating configMap with name projected-configmap-test-volume-219d0fc0-396e-4867-b5f9-c161ace9f3d4 STEP: Creating a pod to test consume configMaps Jun 18 00:09:25.225: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2c6a3d94-cb4c-44be-859f-c1c4e1869558" in namespace "projected-3244" to be "Succeeded or Failed" Jun 18 00:09:25.228: INFO: Pod "pod-projected-configmaps-2c6a3d94-cb4c-44be-859f-c1c4e1869558": Phase="Pending", Reason="", readiness=false. Elapsed: 3.307313ms Jun 18 00:09:27.232: INFO: Pod "pod-projected-configmaps-2c6a3d94-cb4c-44be-859f-c1c4e1869558": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006553947s Jun 18 00:09:29.235: INFO: Pod "pod-projected-configmaps-2c6a3d94-cb4c-44be-859f-c1c4e1869558": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009765129s STEP: Saw pod success Jun 18 00:09:29.235: INFO: Pod "pod-projected-configmaps-2c6a3d94-cb4c-44be-859f-c1c4e1869558" satisfied condition "Succeeded or Failed" Jun 18 00:09:29.237: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-2c6a3d94-cb4c-44be-859f-c1c4e1869558 container agnhost-container: STEP: delete the pod Jun 18 00:09:29.249: INFO: Waiting for pod pod-projected-configmaps-2c6a3d94-cb4c-44be-859f-c1c4e1869558 to disappear Jun 18 00:09:29.251: INFO: Pod pod-projected-configmaps-2c6a3d94-cb4c-44be-859f-c1c4e1869558 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:29.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3244" for this suite. 
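The projected-ConfigMap spec above verifies that a volume projected from a ConfigMap is readable by a non-root user when fsGroup is set. A minimal standalone reproduction might look like the following; names, image, and the uid/gid values are illustrative, not the ones generated by the suite:

    kubectl create configmap demo-config --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-projected-configmap
    spec:
      securityContext:
        runAsUser: 1000        # non-root
        fsGroup: 1000          # group ownership applied to the volume
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "cat /etc/config/data-1 && id"]
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        projected:
          sources:
          - configMap:
              name: demo-config
      restartPolicy: Never
    EOF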
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":15,"skipped":394,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:23.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 18 00:09:23.495: INFO: The status of Pod test-hostpath-type-mxgp2 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:09:25.499: INFO: The status of Pod test-hostpath-type-mxgp2 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:09:27.498: INFO: The status of Pod test-hostpath-type-mxgp2 is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:29.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-4032" for this suite. 
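The HostPathType spec above creates a socket on the node and then mounts that path with hostPath type BlockDevice; the kubelet's type check is expected to fail, which the test observes as an error event. Sketched as a manifest (the socket path and names are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-hostpath-type-mismatch
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: sock
          mountPath: /mnt/sock
      volumes:
      - name: sock
        hostPath:
          path: /tmp/asocket        # placeholder: an existing UNIX socket on the node
          type: BlockDevice         # wrong type on purpose; kubelet rejects the mount
      restartPolicy: Never
    EOF
    # The pod should stay Pending with a FailedMount-style event from the kubelet
    # reporting that the hostPath type check failed.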
• [SLOW TEST:6.084 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev","total":-1,"completed":10,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:29.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 STEP: Creating a pod to test downward API volume plugin Jun 18 00:09:29.616: INFO: Waiting up to 5m0s for pod "metadata-volume-2c8a0d0f-07e0-4e28-b3cb-bbaf0422adf9" in namespace "downward-api-1430" to be "Succeeded or Failed" Jun 18 00:09:29.619: INFO: Pod "metadata-volume-2c8a0d0f-07e0-4e28-b3cb-bbaf0422adf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069698ms Jun 18 00:09:31.622: INFO: Pod "metadata-volume-2c8a0d0f-07e0-4e28-b3cb-bbaf0422adf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005393935s Jun 18 00:09:33.627: INFO: Pod "metadata-volume-2c8a0d0f-07e0-4e28-b3cb-bbaf0422adf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010629111s STEP: Saw pod success Jun 18 00:09:33.627: INFO: Pod "metadata-volume-2c8a0d0f-07e0-4e28-b3cb-bbaf0422adf9" satisfied condition "Succeeded or Failed" Jun 18 00:09:33.629: INFO: Trying to get logs from node node1 pod metadata-volume-2c8a0d0f-07e0-4e28-b3cb-bbaf0422adf9 container client-container: STEP: delete the pod Jun 18 00:09:33.643: INFO: Waiting for pod metadata-volume-2c8a0d0f-07e0-4e28-b3cb-bbaf0422adf9 to disappear Jun 18 00:09:33.645: INFO: Pod metadata-volume-2c8a0d0f-07e0-4e28-b3cb-bbaf0422adf9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:33.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1430" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":261,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:57.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-5922 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:07:57.147: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5922-6332/csi-attacher Jun 18 00:07:57.150: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5922 Jun 18 00:07:57.150: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5922 Jun 18 00:07:57.153: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5922 Jun 18 00:07:57.156: INFO: creating *v1.Role: csi-mock-volumes-5922-6332/external-attacher-cfg-csi-mock-volumes-5922 Jun 18 00:07:57.159: INFO: creating *v1.RoleBinding: csi-mock-volumes-5922-6332/csi-attacher-role-cfg Jun 18 00:07:57.162: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5922-6332/csi-provisioner Jun 18 00:07:57.165: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5922 Jun 18 00:07:57.165: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5922 Jun 18 00:07:57.167: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5922 Jun 18 00:07:57.170: INFO: creating *v1.Role: csi-mock-volumes-5922-6332/external-provisioner-cfg-csi-mock-volumes-5922 Jun 18 00:07:57.173: INFO: creating *v1.RoleBinding: csi-mock-volumes-5922-6332/csi-provisioner-role-cfg Jun 18 00:07:57.175: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5922-6332/csi-resizer Jun 18 00:07:57.178: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5922 Jun 18 00:07:57.178: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5922 Jun 18 00:07:57.181: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5922 Jun 18 00:07:57.183: INFO: creating *v1.Role: csi-mock-volumes-5922-6332/external-resizer-cfg-csi-mock-volumes-5922 Jun 18 00:07:57.185: INFO: creating *v1.RoleBinding: csi-mock-volumes-5922-6332/csi-resizer-role-cfg Jun 18 00:07:57.188: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5922-6332/csi-snapshotter Jun 18 00:07:57.191: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5922 Jun 18 00:07:57.191: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5922 Jun 18 00:07:57.193: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5922 Jun 18 00:07:57.197: INFO: creating *v1.Role: csi-mock-volumes-5922-6332/external-snapshotter-leaderelection-csi-mock-volumes-5922 Jun 18 00:07:57.200: INFO: creating *v1.RoleBinding: csi-mock-volumes-5922-6332/external-snapshotter-leaderelection Jun 18 00:07:57.203: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-5922-6332/csi-mock Jun 18 00:07:57.205: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5922 Jun 18 00:07:57.207: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5922 Jun 18 00:07:57.210: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5922 Jun 18 00:07:57.213: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5922 Jun 18 00:07:57.262: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5922 Jun 18 00:07:57.264: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5922 Jun 18 00:07:57.267: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5922 Jun 18 00:07:57.270: INFO: creating *v1.StatefulSet: csi-mock-volumes-5922-6332/csi-mockplugin Jun 18 00:07:57.274: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5922 Jun 18 00:07:57.276: INFO: creating *v1.StatefulSet: csi-mock-volumes-5922-6332/csi-mockplugin-attacher Jun 18 00:07:57.280: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5922" Jun 18 00:07:57.283: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5922 to register on node node2 STEP: Creating pod Jun 18 00:08:38.879: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:08:38.886: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6mfmm] to have phase Bound Jun 18 00:08:38.888: INFO: PersistentVolumeClaim pvc-6mfmm found but phase is Pending instead of Bound. Jun 18 00:08:40.893: INFO: PersistentVolumeClaim pvc-6mfmm found and phase=Bound (2.006969976s) STEP: Deleting the previously created pod Jun 18 00:09:16.919: INFO: Deleting pod "pvc-volume-tester-vzqlg" in namespace "csi-mock-volumes-5922" Jun 18 00:09:16.924: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vzqlg" to be fully deleted STEP: Checking CSI driver logs Jun 18 00:09:20.945: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3d6913c7-fcd8-4f97-8923-2451e9e90492/volumes/kubernetes.io~csi/pvc-11cfd8b7-a15a-4bf2-b548-61c3138ef5c9/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-vzqlg Jun 18 00:09:20.945: INFO: Deleting pod "pvc-volume-tester-vzqlg" in namespace "csi-mock-volumes-5922" STEP: Deleting claim pvc-6mfmm Jun 18 00:09:20.954: INFO: Waiting up to 2m0s for PersistentVolume pvc-11cfd8b7-a15a-4bf2-b548-61c3138ef5c9 to get deleted Jun 18 00:09:20.956: INFO: PersistentVolume pvc-11cfd8b7-a15a-4bf2-b548-61c3138ef5c9 found and phase=Bound (2.40255ms) Jun 18 00:09:22.961: INFO: PersistentVolume pvc-11cfd8b7-a15a-4bf2-b548-61c3138ef5c9 was removed STEP: Deleting storageclass csi-mock-volumes-5922-sc522wc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5922 STEP: Waiting for namespaces [csi-mock-volumes-5922] to vanish STEP: uninstalling csi mock driver Jun 18 00:09:28.972: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5922-6332/csi-attacher Jun 18 00:09:28.976: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5922 Jun 18 00:09:28.979: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5922 Jun 18 00:09:28.984: INFO: deleting *v1.Role: 
csi-mock-volumes-5922-6332/external-attacher-cfg-csi-mock-volumes-5922 Jun 18 00:09:28.987: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5922-6332/csi-attacher-role-cfg Jun 18 00:09:28.991: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5922-6332/csi-provisioner Jun 18 00:09:28.994: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5922 Jun 18 00:09:28.998: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5922 Jun 18 00:09:29.001: INFO: deleting *v1.Role: csi-mock-volumes-5922-6332/external-provisioner-cfg-csi-mock-volumes-5922 Jun 18 00:09:29.005: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5922-6332/csi-provisioner-role-cfg Jun 18 00:09:29.009: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5922-6332/csi-resizer Jun 18 00:09:29.012: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5922 Jun 18 00:09:29.016: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5922 Jun 18 00:09:29.021: INFO: deleting *v1.Role: csi-mock-volumes-5922-6332/external-resizer-cfg-csi-mock-volumes-5922 Jun 18 00:09:29.024: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5922-6332/csi-resizer-role-cfg Jun 18 00:09:29.028: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5922-6332/csi-snapshotter Jun 18 00:09:29.031: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5922 Jun 18 00:09:29.035: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5922 Jun 18 00:09:29.039: INFO: deleting *v1.Role: csi-mock-volumes-5922-6332/external-snapshotter-leaderelection-csi-mock-volumes-5922 Jun 18 00:09:29.043: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5922-6332/external-snapshotter-leaderelection Jun 18 00:09:29.046: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5922-6332/csi-mock Jun 18 00:09:29.049: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5922 Jun 18 00:09:29.053: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5922 Jun 18 00:09:29.057: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5922 Jun 18 00:09:29.061: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5922 Jun 18 00:09:29.065: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5922 Jun 18 00:09:29.069: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5922 Jun 18 00:09:29.073: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5922 Jun 18 00:09:29.076: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5922-6332/csi-mockplugin Jun 18 00:09:29.080: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5922 Jun 18 00:09:29.083: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5922-6332/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5922-6332 STEP: Waiting for namespaces [csi-mock-volumes-5922-6332] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:41.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:104.019 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":3,"skipped":86,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:10.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:09:20.278: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-7accdfe4-c581-47be-9050-bdf87b70a294-backend && mount --bind /tmp/local-volume-test-7accdfe4-c581-47be-9050-bdf87b70a294-backend /tmp/local-volume-test-7accdfe4-c581-47be-9050-bdf87b70a294-backend && ln -s /tmp/local-volume-test-7accdfe4-c581-47be-9050-bdf87b70a294-backend /tmp/local-volume-test-7accdfe4-c581-47be-9050-bdf87b70a294] Namespace:persistent-local-volumes-test-2600 PodName:hostexec-node2-cfv9f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:20.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:09:20.372: INFO: Creating a PV followed by a PVC Jun 18 00:09:20.379: INFO: Waiting for PV local-pvd6829 to bind to PVC pvc-t449v Jun 18 00:09:20.379: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-t449v] to have phase Bound Jun 18 00:09:20.382: INFO: PersistentVolumeClaim pvc-t449v found but phase is Pending instead of Bound. 
Jun 18 00:09:22.385: INFO: PersistentVolumeClaim pvc-t449v found and phase=Bound (2.00555164s) Jun 18 00:09:22.385: INFO: Waiting up to 3m0s for PersistentVolume local-pvd6829 to have phase Bound Jun 18 00:09:22.387: INFO: PersistentVolume local-pvd6829 found and phase=Bound (2.208842ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:09:32.412: INFO: pod "pod-72358082-dd11-4f69-a68f-a39ddf98cd0f" created on Node "node2" STEP: Writing in pod1 Jun 18 00:09:32.412: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2600 PodName:pod-72358082-dd11-4f69-a68f-a39ddf98cd0f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:32.412: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:32.675: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:09:32.675: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2600 PodName:pod-72358082-dd11-4f69-a68f-a39ddf98cd0f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:32.675: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:32.823: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-72358082-dd11-4f69-a68f-a39ddf98cd0f in namespace persistent-local-volumes-test-2600 STEP: Creating pod2 STEP: Creating a pod Jun 18 00:09:56.851: INFO: pod "pod-d688ad2a-27e5-4d8c-93fd-75772e76d143" created on Node "node2" STEP: Reading in pod2 Jun 18 00:09:56.851: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2600 PodName:pod-d688ad2a-27e5-4d8c-93fd-75772e76d143 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:56.851: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:56.933: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-d688ad2a-27e5-4d8c-93fd-75772e76d143 in namespace persistent-local-volumes-test-2600 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:09:56.940: INFO: Deleting PersistentVolumeClaim "pvc-t449v" Jun 18 00:09:56.943: INFO: Deleting PersistentVolume "local-pvd6829" STEP: Removing the test directory Jun 18 00:09:56.947: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-7accdfe4-c581-47be-9050-bdf87b70a294 && umount /tmp/local-volume-test-7accdfe4-c581-47be-9050-bdf87b70a294-backend && rm -r /tmp/local-volume-test-7accdfe4-c581-47be-9050-bdf87b70a294-backend] Namespace:persistent-local-volumes-test-2600 PodName:hostexec-node2-cfv9f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:09:56.947: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:57.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2600" for this suite. • [SLOW TEST:46.846 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":260,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:04:54.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 STEP: Building a driver namespace object, basename csi-mock-volumes-81 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:04:54.475: INFO: creating *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-attacher Jun 18 00:04:54.478: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-81 Jun 18 00:04:54.478: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-81 Jun 18 00:04:54.481: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-81 Jun 18 00:04:54.484: INFO: creating *v1.Role: csi-mock-volumes-81-7702/external-attacher-cfg-csi-mock-volumes-81 Jun 18 00:04:54.486: INFO: creating *v1.RoleBinding: csi-mock-volumes-81-7702/csi-attacher-role-cfg Jun 18 00:04:54.489: INFO: creating *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-provisioner Jun 18 00:04:54.492: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-81 Jun 18 00:04:54.492: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-81 Jun 18 00:04:54.495: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-81 Jun 18 00:04:54.498: INFO: creating *v1.Role: csi-mock-volumes-81-7702/external-provisioner-cfg-csi-mock-volumes-81 Jun 18 00:04:54.501: INFO: creating *v1.RoleBinding: csi-mock-volumes-81-7702/csi-provisioner-role-cfg Jun 18 00:04:54.504: INFO: creating *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-resizer Jun 18 00:04:54.506: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-81 Jun 18 00:04:54.506: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-81 Jun 18 
00:04:54.509: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-81 Jun 18 00:04:54.512: INFO: creating *v1.Role: csi-mock-volumes-81-7702/external-resizer-cfg-csi-mock-volumes-81 Jun 18 00:04:54.515: INFO: creating *v1.RoleBinding: csi-mock-volumes-81-7702/csi-resizer-role-cfg Jun 18 00:04:54.517: INFO: creating *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-snapshotter Jun 18 00:04:54.519: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-81 Jun 18 00:04:54.519: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-81 Jun 18 00:04:54.522: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-81 Jun 18 00:04:54.524: INFO: creating *v1.Role: csi-mock-volumes-81-7702/external-snapshotter-leaderelection-csi-mock-volumes-81 Jun 18 00:04:54.526: INFO: creating *v1.RoleBinding: csi-mock-volumes-81-7702/external-snapshotter-leaderelection Jun 18 00:04:54.529: INFO: creating *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-mock Jun 18 00:04:54.532: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-81 Jun 18 00:04:54.534: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-81 Jun 18 00:04:54.537: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-81 Jun 18 00:04:54.542: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-81 Jun 18 00:04:54.544: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-81 Jun 18 00:04:54.549: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-81 Jun 18 00:04:54.552: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-81 Jun 18 00:04:54.554: INFO: creating *v1.StatefulSet: csi-mock-volumes-81-7702/csi-mockplugin Jun 18 00:04:54.558: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-81 to register on node node2 STEP: Creating pod Jun 18 00:05:04.075: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:05:04.080: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rcqbn] to have phase Bound Jun 18 00:05:04.082: INFO: PersistentVolumeClaim pvc-rcqbn found but phase is Pending instead of Bound. 
Jun 18 00:05:06.085: INFO: PersistentVolumeClaim pvc-rcqbn found and phase=Bound (2.004701177s) STEP: Checking if attaching failed and pod cannot start STEP: Checking if VolumeAttachment was created for the pod STEP: Deploy CSIDriver object with attachRequired=false Jun 18 00:07:08.115: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-81 STEP: Wait for the pod in running status STEP: Wait for the volumeattachment to be deleted up to 7m0s STEP: Deleting pod pvc-volume-tester-9np7b Jun 18 00:09:12.139: INFO: Deleting pod "pvc-volume-tester-9np7b" in namespace "csi-mock-volumes-81" Jun 18 00:09:12.145: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9np7b" to be fully deleted STEP: Deleting claim pvc-rcqbn Jun 18 00:09:22.161: INFO: Waiting up to 2m0s for PersistentVolume pvc-402eccbf-7400-431e-8647-7455b9508f2c to get deleted Jun 18 00:09:22.163: INFO: PersistentVolume pvc-402eccbf-7400-431e-8647-7455b9508f2c found and phase=Bound (2.28279ms) Jun 18 00:09:24.167: INFO: PersistentVolume pvc-402eccbf-7400-431e-8647-7455b9508f2c was removed STEP: Deleting storageclass csi-mock-volumes-81-scxpg2v STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-81 STEP: Waiting for namespaces [csi-mock-volumes-81] to vanish STEP: uninstalling csi mock driver Jun 18 00:09:30.180: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-attacher Jun 18 00:09:30.189: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-81 Jun 18 00:09:30.199: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-81 Jun 18 00:09:30.206: INFO: deleting *v1.Role: csi-mock-volumes-81-7702/external-attacher-cfg-csi-mock-volumes-81 Jun 18 00:09:30.210: INFO: deleting *v1.RoleBinding: csi-mock-volumes-81-7702/csi-attacher-role-cfg Jun 18 00:09:30.214: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-provisioner Jun 18 00:09:30.218: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-81 Jun 18 00:09:30.222: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-81 Jun 18 00:09:30.226: INFO: deleting *v1.Role: csi-mock-volumes-81-7702/external-provisioner-cfg-csi-mock-volumes-81 Jun 18 00:09:30.229: INFO: deleting *v1.RoleBinding: csi-mock-volumes-81-7702/csi-provisioner-role-cfg Jun 18 00:09:30.232: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-resizer Jun 18 00:09:30.236: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-81 Jun 18 00:09:30.239: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-81 Jun 18 00:09:30.243: INFO: deleting *v1.Role: csi-mock-volumes-81-7702/external-resizer-cfg-csi-mock-volumes-81 Jun 18 00:09:30.246: INFO: deleting *v1.RoleBinding: csi-mock-volumes-81-7702/csi-resizer-role-cfg Jun 18 00:09:30.249: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-snapshotter Jun 18 00:09:30.253: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-81 Jun 18 00:09:30.256: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-81 Jun 18 00:09:30.260: INFO: deleting *v1.Role: csi-mock-volumes-81-7702/external-snapshotter-leaderelection-csi-mock-volumes-81 Jun 18 00:09:30.263: INFO: deleting *v1.RoleBinding: csi-mock-volumes-81-7702/external-snapshotter-leaderelection Jun 18 00:09:30.267: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-81-7702/csi-mock Jun 18 00:09:30.271: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-81 
Jun 18 00:09:30.274: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-81 Jun 18 00:09:30.277: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-81 Jun 18 00:09:30.281: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-81 Jun 18 00:09:30.284: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-81 Jun 18 00:09:30.287: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-81 Jun 18 00:09:30.291: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-81 Jun 18 00:09:30.294: INFO: deleting *v1.StatefulSet: csi-mock-volumes-81-7702/csi-mockplugin STEP: deleting the driver namespace: csi-mock-volumes-81-7702 STEP: Waiting for namespaces [csi-mock-volumes-81-7702] to vanish Jun 18 00:09:58.309: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-81 [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:09:58.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:303.902 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI CSIDriver deployment after pod creation using non-attachable mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:372 should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]","total":-1,"completed":8,"skipped":196,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:58.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99" Jun 18 00:10:00.415: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99 && dd if=/dev/zero of=/tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99/file] Namespace:persistent-local-volumes-test-5270 PodName:hostexec-node1-txg75 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:00.415: INFO: >>> 
kubeConfig: /root/.kube/config Jun 18 00:10:00.557: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5270 PodName:hostexec-node1-txg75 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:00.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:10:00.654: INFO: Creating a PV followed by a PVC Jun 18 00:10:00.662: INFO: Waiting for PV local-pvdvcmb to bind to PVC pvc-x25f8 Jun 18 00:10:00.662: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-x25f8] to have phase Bound Jun 18 00:10:00.664: INFO: PersistentVolumeClaim pvc-x25f8 found but phase is Pending instead of Bound. Jun 18 00:10:02.668: INFO: PersistentVolumeClaim pvc-x25f8 found and phase=Bound (2.006604959s) Jun 18 00:10:02.668: INFO: Waiting up to 3m0s for PersistentVolume local-pvdvcmb to have phase Bound Jun 18 00:10:02.671: INFO: PersistentVolume local-pvdvcmb found and phase=Bound (2.307813ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:10:06.703: INFO: pod "pod-95c4ab21-5ce0-422c-8141-da014465ddd3" created on Node "node1" STEP: Writing in pod1 Jun 18 00:10:06.703: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5270 PodName:pod-95c4ab21-5ce0-422c-8141-da014465ddd3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:10:06.703: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:06.780: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000121 seconds, 145.3KB/s", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 18 00:10:06.780: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-5270 PodName:pod-95c4ab21-5ce0-422c-8141-da014465ddd3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:10:06.780: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:06.856: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod1 Jun 18 00:10:06.856: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd 
if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5270 PodName:pod-95c4ab21-5ce0-422c-8141-da014465ddd3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:10:06.856: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:06.989: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000033 seconds, 325.5KB/s", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-95c4ab21-5ce0-422c-8141-da014465ddd3 in namespace persistent-local-volumes-test-5270 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:10:06.994: INFO: Deleting PersistentVolumeClaim "pvc-x25f8" Jun 18 00:10:06.998: INFO: Deleting PersistentVolume "local-pvdvcmb" Jun 18 00:10:07.002: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5270 PodName:hostexec-node1-txg75 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:07.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99/file Jun 18 00:10:07.102: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5270 PodName:hostexec-node1-txg75 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:07.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99 Jun 18 00:10:07.187: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99] Namespace:persistent-local-volumes-test-5270 PodName:hostexec-node1-txg75 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:07.187: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:07.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5270" for this suite. 
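The block-volume setup and teardown in this spec amounts to a file-backed loop device on the node. Outside the hostexec pod the same sequence, mirroring the ExecWithOptions commands above (paths and device taken from the log; this is a sketch, not the framework's exact code path), is:

# create the backing file and attach it to the first free loop device
mkdir -p /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99
dd if=/dev/zero of=/tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99/file bs=4096 count=5120
losetup -f /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99/file

# look up which loop device holds the file (here /dev/loop0), then tear it all down
losetup | grep /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99/file | awk '{ print $1 }'
losetup -d /dev/loop0
rm -r /tmp/local-volume-test-7b22e77b-38ca-4ac7-8ebc-46b06244cb99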
• [SLOW TEST:8.943 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":9,"skipped":215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:07.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:10:07.396: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:07.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9907" for this suite. 
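The Volume metrics specs above (and several of the skips that follow) are gated on the cloud provider the suite was started with; this run uses the local provider, so anything that needs gce/gke/aws skips in BeforeEach. Re-running only those specs against a supported provider would look roughly like this; the flag names are the usual e2e.test flags and are an assumption here, not taken from this log:

./e2e.test --kubeconfig=$HOME/.kube/config \
  --provider=gce \
  -ginkgo.focus='\[sig-storage\] \[Serial\] Volume metrics'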
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:07.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Jun 18 00:10:07.487: INFO: Only supported for providers [aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:07.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-9266" for this suite. 
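The Dynamic Provisioning spec skipped above ("should report an error and create no PV", the invalid AWS KMS key case named in the summary that follows) only runs on AWS, where the in-tree EBS provisioner takes an encryption key parameter. On such a cluster the scenario boils down to a StorageClass like the sketch below plus a claim against it; the class name and key ARN are made up, and encrypted/kmsKeyId are the documented kubernetes.io/aws-ebs parameters rather than values from this log. The claim should then stay Pending with a provisioning error event and no PV created.

cat <<'EOF' | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: invalid-kms-key            # hypothetical name
provisioner: kubernetes.io/aws-ebs
parameters:
  encrypted: "true"
  kmsKeyId: arn:aws:kms:us-east-1:000000000000:key/does-not-exist   # deliberately invalid
EOF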
S [SKIPPING] [0.034 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Invalid AWS KMS key /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824 should report an error and create no PV [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Only supported for providers [aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:57.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-462 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:07:57.654: INFO: creating *v1.ServiceAccount: csi-mock-volumes-462-7351/csi-attacher Jun 18 00:07:57.656: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-462 Jun 18 00:07:57.656: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-462 Jun 18 00:07:57.660: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-462 Jun 18 00:07:57.662: INFO: creating *v1.Role: csi-mock-volumes-462-7351/external-attacher-cfg-csi-mock-volumes-462 Jun 18 00:07:57.665: INFO: creating *v1.RoleBinding: csi-mock-volumes-462-7351/csi-attacher-role-cfg Jun 18 00:07:57.668: INFO: creating *v1.ServiceAccount: csi-mock-volumes-462-7351/csi-provisioner Jun 18 00:07:57.671: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-462 Jun 18 00:07:57.671: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-462 Jun 18 00:07:57.673: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-462 Jun 18 00:07:57.676: INFO: creating *v1.Role: csi-mock-volumes-462-7351/external-provisioner-cfg-csi-mock-volumes-462 Jun 18 00:07:57.679: INFO: creating *v1.RoleBinding: csi-mock-volumes-462-7351/csi-provisioner-role-cfg Jun 18 00:07:57.681: INFO: creating *v1.ServiceAccount: csi-mock-volumes-462-7351/csi-resizer Jun 18 00:07:57.683: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-462 Jun 18 00:07:57.683: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-462 Jun 18 00:07:57.686: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-462 Jun 18 00:07:57.688: INFO: creating *v1.Role: csi-mock-volumes-462-7351/external-resizer-cfg-csi-mock-volumes-462 Jun 18 00:07:57.690: INFO: creating *v1.RoleBinding: csi-mock-volumes-462-7351/csi-resizer-role-cfg Jun 18 00:07:57.693: INFO: creating *v1.ServiceAccount: csi-mock-volumes-462-7351/csi-snapshotter Jun 18 00:07:57.696: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-462 Jun 18 00:07:57.696: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-462 Jun 18 00:07:57.698: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-462 Jun 18 00:07:57.701: INFO: creating *v1.Role: csi-mock-volumes-462-7351/external-snapshotter-leaderelection-csi-mock-volumes-462 Jun 18 00:07:57.703: INFO: creating *v1.RoleBinding: csi-mock-volumes-462-7351/external-snapshotter-leaderelection Jun 18 00:07:57.706: INFO: creating *v1.ServiceAccount: csi-mock-volumes-462-7351/csi-mock Jun 18 00:07:57.709: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-462 Jun 18 00:07:57.711: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-462 Jun 18 00:07:57.714: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-462 Jun 18 00:07:57.716: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-462 Jun 18 00:07:57.719: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-462 Jun 18 00:07:57.724: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-462 Jun 18 00:07:57.726: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-462 Jun 18 00:07:57.729: INFO: creating *v1.StatefulSet: csi-mock-volumes-462-7351/csi-mockplugin Jun 18 00:07:57.732: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-462 Jun 18 00:07:57.735: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-462" Jun 18 00:07:57.737: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-462 to register on node node2 STEP: Creating pod Jun 18 00:08:24.138: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:08:24.143: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ztzcm] to have phase Bound Jun 18 00:08:24.146: INFO: PersistentVolumeClaim pvc-ztzcm found but phase is Pending instead of Bound. 
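The claim created above (pvc-ztzcm, Pending here and Bound a moment later) is equivalent to roughly the following; only the namespace, claim prefix and the per-test StorageClass name (deleted again at the end of this spec) come from the log, while the size and access mode are assumptions since the log does not print the claim spec:

cat <<'EOF' | kubectl -n csi-mock-volumes-462 create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  generateName: pvc-
spec:
  accessModes: ["ReadWriteOnce"]                   # assumed
  resources:
    requests:
      storage: 1Gi                                 # assumed
  storageClassName: csi-mock-volumes-462-sczkl4p   # per-test class from the cleanup step
EOF
# watch it go Pending -> Bound, as the "to have phase Bound" messages above do
kubectl -n csi-mock-volumes-462 get pvc -w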
Jun 18 00:08:26.152: INFO: PersistentVolumeClaim pvc-ztzcm found and phase=Bound (2.009060915s) Jun 18 00:08:26.168: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ztzcm] to have phase Bound Jun 18 00:08:26.170: INFO: PersistentVolumeClaim pvc-ztzcm found and phase=Bound (1.980073ms) STEP: Waiting for expected CSI calls STEP: Waiting for pod to be running STEP: Deleting the previously created pod Jun 18 00:09:14.197: INFO: Deleting pod "pvc-volume-tester-jr75q" in namespace "csi-mock-volumes-462" Jun 18 00:09:14.203: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jr75q" to be fully deleted STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-jr75q Jun 18 00:09:35.217: INFO: Deleting pod "pvc-volume-tester-jr75q" in namespace "csi-mock-volumes-462" STEP: Deleting claim pvc-ztzcm Jun 18 00:09:35.225: INFO: Waiting up to 2m0s for PersistentVolume pvc-f1ee9ccc-f0dc-40ad-855b-1d6189926ec4 to get deleted Jun 18 00:09:35.227: INFO: PersistentVolume pvc-f1ee9ccc-f0dc-40ad-855b-1d6189926ec4 found and phase=Bound (2.04505ms) Jun 18 00:09:37.230: INFO: PersistentVolume pvc-f1ee9ccc-f0dc-40ad-855b-1d6189926ec4 was removed STEP: Deleting storageclass csi-mock-volumes-462-sczkl4p STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-462 STEP: Waiting for namespaces [csi-mock-volumes-462] to vanish STEP: uninstalling csi mock driver Jun 18 00:09:43.244: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-462-7351/csi-attacher Jun 18 00:09:43.248: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-462 Jun 18 00:09:43.252: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-462 Jun 18 00:09:43.256: INFO: deleting *v1.Role: csi-mock-volumes-462-7351/external-attacher-cfg-csi-mock-volumes-462 Jun 18 00:09:43.260: INFO: deleting *v1.RoleBinding: csi-mock-volumes-462-7351/csi-attacher-role-cfg Jun 18 00:09:43.263: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-462-7351/csi-provisioner Jun 18 00:09:43.267: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-462 Jun 18 00:09:43.270: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-462 Jun 18 00:09:43.277: INFO: deleting *v1.Role: csi-mock-volumes-462-7351/external-provisioner-cfg-csi-mock-volumes-462 Jun 18 00:09:43.284: INFO: deleting *v1.RoleBinding: csi-mock-volumes-462-7351/csi-provisioner-role-cfg Jun 18 00:09:43.292: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-462-7351/csi-resizer Jun 18 00:09:43.297: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-462 Jun 18 00:09:43.301: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-462 Jun 18 00:09:43.305: INFO: deleting *v1.Role: csi-mock-volumes-462-7351/external-resizer-cfg-csi-mock-volumes-462 Jun 18 00:09:43.308: INFO: deleting *v1.RoleBinding: csi-mock-volumes-462-7351/csi-resizer-role-cfg Jun 18 00:09:43.311: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-462-7351/csi-snapshotter Jun 18 00:09:43.314: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-462 Jun 18 00:09:43.318: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-462 Jun 18 00:09:43.321: INFO: deleting *v1.Role: csi-mock-volumes-462-7351/external-snapshotter-leaderelection-csi-mock-volumes-462 Jun 18 00:09:43.324: INFO: deleting *v1.RoleBinding: csi-mock-volumes-462-7351/external-snapshotter-leaderelection Jun 18 00:09:43.329: INFO: deleting 
*v1.ServiceAccount: csi-mock-volumes-462-7351/csi-mock Jun 18 00:09:43.332: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-462 Jun 18 00:09:43.336: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-462 Jun 18 00:09:43.339: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-462 Jun 18 00:09:43.342: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-462 Jun 18 00:09:43.345: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-462 Jun 18 00:09:43.348: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-462 Jun 18 00:09:43.351: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-462 Jun 18 00:09:43.354: INFO: deleting *v1.StatefulSet: csi-mock-volumes-462-7351/csi-mockplugin Jun 18 00:09:43.358: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-462 STEP: deleting the driver namespace: csi-mock-volumes-462-7351 STEP: Waiting for namespaces [csi-mock-volumes-462-7351] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:11.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:133.780 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success","total":-1,"completed":5,"skipped":184,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:11.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Jun 18 00:10:11.422: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:11.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-8068" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:90 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:07.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-d905ec11-b840-49da-8b08-8714da6b9d99" Jun 18 00:10:09.563: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d905ec11-b840-49da-8b08-8714da6b9d99 && dd if=/dev/zero of=/tmp/local-volume-test-d905ec11-b840-49da-8b08-8714da6b9d99/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-d905ec11-b840-49da-8b08-8714da6b9d99/file] Namespace:persistent-local-volumes-test-5408 PodName:hostexec-node1-mlj5d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:09.563: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:10.160: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d905ec11-b840-49da-8b08-8714da6b9d99/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5408 PodName:hostexec-node1-mlj5d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:10.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:10:10.348: INFO: Creating a PV followed by a PVC Jun 18 00:10:10.354: INFO: Waiting for PV local-pvq2v5w to bind to PVC pvc-qpr85 Jun 18 00:10:10.354: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-qpr85] to have phase Bound Jun 18 00:10:10.357: INFO: PersistentVolumeClaim pvc-qpr85 found but phase is Pending instead of Bound. 
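This [Volume type: block] spec is about to skip its fsGroup setup (see the "We don't set fsGroup on block device" message just below): a claim with volumeMode: Block is handed to the container as a raw device via volumeDevices, so there is no filesystem for the kubelet to chown and securityContext.fsGroup has nothing to act on. A minimal consumer pod for such a claim looks roughly like the sketch below; only the claim name and namespace come from the log, while the pod name, image and device path are placeholders:

cat <<'EOF' | kubectl -n persistent-local-volumes-test-5408 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer                  # hypothetical
spec:
  securityContext:
    fsGroup: 1000                       # ignored for the raw block device below
  containers:
  - name: app
    image: busybox                      # placeholder image
    command: ["sleep", "3600"]
    volumeDevices:                      # raw block: a device path, not a mount
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-qpr85              # the claim bound in this spec
EOF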
Jun 18 00:10:12.361: INFO: PersistentVolumeClaim pvc-qpr85 found and phase=Bound (2.006267578s) Jun 18 00:10:12.361: INFO: Waiting up to 3m0s for PersistentVolume local-pvq2v5w to have phase Bound Jun 18 00:10:12.363: INFO: PersistentVolume local-pvq2v5w found and phase=Bound (1.825704ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Jun 18 00:10:12.366: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:10:12.368: INFO: Deleting PersistentVolumeClaim "pvc-qpr85" Jun 18 00:10:12.372: INFO: Deleting PersistentVolume "local-pvq2v5w" Jun 18 00:10:12.377: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d905ec11-b840-49da-8b08-8714da6b9d99/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5408 PodName:hostexec-node1-mlj5d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:12.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop2" on node "node1" at path /tmp/local-volume-test-d905ec11-b840-49da-8b08-8714da6b9d99/file Jun 18 00:10:12.722: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop2] Namespace:persistent-local-volumes-test-5408 PodName:hostexec-node1-mlj5d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:12.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-d905ec11-b840-49da-8b08-8714da6b9d99 Jun 18 00:10:13.082: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d905ec11-b840-49da-8b08-8714da6b9d99] Namespace:persistent-local-volumes-test-5408 PodName:hostexec-node1-mlj5d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:13.082: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:13.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5408" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [5.822 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:13.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 18 00:10:21.514: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5829 PodName:hostexec-node1-p6fxh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:21.514: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:21.610: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 18 00:10:21.610: INFO: exec node1: stdout: "0\n" Jun 18 00:10:21.610: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 18 00:10:21.610: INFO: exec node1: exit code: 0 Jun 18 00:10:21.610: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:21.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5829" for this suite. 
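The gce-localssd-scsi-fs volume type is probed by counting entries under the GCE local-SSD by-uuid path, exactly as the hostexec command above shows. Run by hand on the node the check is just:

ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
# on this bare-metal node the directory does not exist, the count is 0,
# and the spec skips with "Requires at least 1 scsi fs localSSD"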
S [SKIPPING] in Spec Setup (BeforeEach) [8.154 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:21.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Jun 18 00:10:21.733: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Jun 18 00:10:21.739: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:21.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-6584" for this suite. 
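The PVC Protection spec above skips because the cluster has no default StorageClass for its test claim. Marking an existing class as the default is a single annotation patch; the class name below is a placeholder, the annotation key is the standard one:

kubectl patch storageclass <some-existing-class> \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
kubectl get storageclass        # the chosen class now shows up with "(default)"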
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify "immediate" deletion of a PVC that is not in active use by a pod [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:08:10.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 STEP: Building a driver namespace object, basename csi-mock-volumes-5931 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 18 00:08:10.756: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-attacher Jun 18 00:08:10.759: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5931 Jun 18 00:08:10.759: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5931 Jun 18 00:08:10.762: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5931 Jun 18 00:08:10.766: INFO: creating *v1.Role: csi-mock-volumes-5931-3647/external-attacher-cfg-csi-mock-volumes-5931 Jun 18 00:08:10.768: INFO: creating *v1.RoleBinding: csi-mock-volumes-5931-3647/csi-attacher-role-cfg Jun 18 00:08:10.772: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-provisioner Jun 18 00:08:10.774: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5931 Jun 18 00:08:10.774: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5931 Jun 18 00:08:10.795: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5931 Jun 18 00:08:10.798: INFO: creating *v1.Role: csi-mock-volumes-5931-3647/external-provisioner-cfg-csi-mock-volumes-5931 Jun 18 00:08:10.802: INFO: creating *v1.RoleBinding: csi-mock-volumes-5931-3647/csi-provisioner-role-cfg Jun 18 00:08:10.806: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-resizer Jun 18 00:08:10.809: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5931 Jun 18 00:08:10.809: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5931 Jun 18 00:08:10.812: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5931 Jun 18 00:08:10.814: INFO: creating *v1.Role: csi-mock-volumes-5931-3647/external-resizer-cfg-csi-mock-volumes-5931 Jun 18 00:08:10.817: INFO: creating *v1.RoleBinding: csi-mock-volumes-5931-3647/csi-resizer-role-cfg Jun 18 00:08:10.820: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-snapshotter Jun 18 00:08:10.822: INFO: creating *v1.ClusterRole: 
external-snapshotter-runner-csi-mock-volumes-5931 Jun 18 00:08:10.822: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5931 Jun 18 00:08:10.828: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5931 Jun 18 00:08:10.835: INFO: creating *v1.Role: csi-mock-volumes-5931-3647/external-snapshotter-leaderelection-csi-mock-volumes-5931 Jun 18 00:08:10.840: INFO: creating *v1.RoleBinding: csi-mock-volumes-5931-3647/external-snapshotter-leaderelection Jun 18 00:08:10.845: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-mock Jun 18 00:08:10.851: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5931 Jun 18 00:08:10.854: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5931 Jun 18 00:08:10.856: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5931 Jun 18 00:08:10.859: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5931 Jun 18 00:08:10.862: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5931 Jun 18 00:08:10.865: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5931 Jun 18 00:08:10.867: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5931 Jun 18 00:08:10.870: INFO: creating *v1.StatefulSet: csi-mock-volumes-5931-3647/csi-mockplugin Jun 18 00:08:10.874: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5931 Jun 18 00:08:10.880: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5931" Jun 18 00:08:10.882: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5931 to register on node node2 I0618 00:09:12.355905 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5931","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:09:12.449615 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0618 00:09:12.452767 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5931","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:09:12.475138 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0618 00:09:12.571133 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0618 00:09:13.047470 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5931"},"Error":"","FullError":null} STEP: Creating pod Jun 18 00:09:15.261: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I0618 
00:09:15.295747 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0618 00:09:15.300161 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6"}}},"Error":"","FullError":null} I0618 00:09:17.351461 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:09:17.353: INFO: >>> kubeConfig: /root/.kube/config I0618 00:09:17.556744 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6","storage.kubernetes.io/csiProvisionerIdentity":"1655510952582-8081-csi-mock-csi-mock-volumes-5931"}},"Response":{},"Error":"","FullError":null} I0618 00:09:18.193085 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:09:18.194: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:18.286: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:18.382: INFO: >>> kubeConfig: /root/.kube/config I0618 00:09:18.492992 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6/globalmount","target_path":"/var/lib/kubelet/pods/e0d26cc8-f660-437c-a7db-faa2d92e1926/volumes/kubernetes.io~csi/pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6","storage.kubernetes.io/csiProvisionerIdentity":"1655510952582-8081-csi-mock-csi-mock-volumes-5931"}},"Response":{},"Error":"","FullError":null} Jun 18 00:09:25.282: INFO: Deleting pod "pvc-volume-tester-ddjfh" in namespace "csi-mock-volumes-5931" Jun 18 00:09:25.286: INFO: Wait up to 5m0s for pod "pvc-volume-tester-ddjfh" to be fully deleted Jun 18 00:09:30.559: INFO: >>> kubeConfig: /root/.kube/config I0618 00:09:30.872042 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e0d26cc8-f660-437c-a7db-faa2d92e1926/volumes/kubernetes.io~csi/pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6/mount"},"Response":{},"Error":"","FullError":null} I0618 00:09:30.960272 37 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:09:30.962085 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6/globalmount"},"Response":{},"Error":"","FullError":null} I0618 00:09:37.310252 37 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Jun 18 00:09:38.297: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9hpbl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5931", SelfLink:"", UID:"421a38fd-2c22-4c2c-9567-d21910ed50e6", ResourceVersion:"95430", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107755, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004784ff0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004785008)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0045bac80), VolumeMode:(*v1.PersistentVolumeMode)(0xc0045bac90), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:09:38.298: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9hpbl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5931", SelfLink:"", UID:"421a38fd-2c22-4c2c-9567-d21910ed50e6", ResourceVersion:"95439", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107755, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001c7e180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001c7e198)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001c7e1b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001c7e1c8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0044800d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0044800e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:09:38.298: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9hpbl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5931", SelfLink:"", UID:"421a38fd-2c22-4c2c-9567-d21910ed50e6", ResourceVersion:"95440", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107755, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5931", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000d33428), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d33440)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000d33458), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d33488)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000d334b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d334e8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0043ddb60), VolumeMode:(*v1.PersistentVolumeMode)(0xc0043ddb80), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:09:38.298: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9hpbl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5931", SelfLink:"", UID:"421a38fd-2c22-4c2c-9567-d21910ed50e6", ResourceVersion:"95451", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107755, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5931", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000d33548), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d33578)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000d335a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d335d8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000d33608), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d33638)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6", StorageClassName:(*string)(0xc0043ddbb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0043ddbc0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:09:38.298: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9hpbl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5931", SelfLink:"", UID:"421a38fd-2c22-4c2c-9567-d21910ed50e6", ResourceVersion:"95452", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107755, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5931", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00096bea8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045f8000)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045f8018), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045f8030)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045f8048), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045f8060)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6", StorageClassName:(*string)(0xc003392d70), VolumeMode:(*v1.PersistentVolumeMode)(0xc003392d80), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", 
AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:09:38.298: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9hpbl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5931", SelfLink:"", UID:"421a38fd-2c22-4c2c-9567-d21910ed50e6", ResourceVersion:"96242", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107755, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc0045f8090), DeletionGracePeriodSeconds:(*int64)(0xc002d6c488), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5931", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045f80a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045f80c0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045f80d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045f80f0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045f8108), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045f8120)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6", StorageClassName:(*string)(0xc003392dc0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003392dd0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:09:38.298: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9hpbl", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5931", SelfLink:"", UID:"421a38fd-2c22-4c2c-9567-d21910ed50e6", ResourceVersion:"96243", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107755, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc0045f8150), DeletionGracePeriodSeconds:(*int64)(0xc002d6c558), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5931", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045f8168), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045f8180)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045f8198), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045f81b0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045f81c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045f81e0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-421a38fd-2c22-4c2c-9567-d21910ed50e6", StorageClassName:(*string)(0xc003392e20), VolumeMode:(*v1.PersistentVolumeMode)(0xc003392e30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-ddjfh Jun 18 00:09:38.299: INFO: Deleting pod "pvc-volume-tester-ddjfh" in namespace "csi-mock-volumes-5931" STEP: Deleting claim pvc-9hpbl STEP: Deleting storageclass csi-mock-volumes-5931-sch66jg STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5931 STEP: Waiting for namespaces [csi-mock-volumes-5931] to vanish STEP: uninstalling csi mock driver Jun 18 00:09:44.334: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-attacher Jun 18 00:09:44.338: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5931 Jun 18 00:09:44.342: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5931 Jun 18 00:09:44.346: INFO: deleting *v1.Role: csi-mock-volumes-5931-3647/external-attacher-cfg-csi-mock-volumes-5931 Jun 18 00:09:44.349: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5931-3647/csi-attacher-role-cfg Jun 18 00:09:44.352: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-provisioner Jun 18 00:09:44.356: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5931 Jun 18 00:09:44.359: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5931 Jun 18 00:09:44.364: INFO: deleting *v1.Role: csi-mock-volumes-5931-3647/external-provisioner-cfg-csi-mock-volumes-5931 Jun 18 00:09:44.371: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5931-3647/csi-provisioner-role-cfg Jun 18 00:09:44.378: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-resizer Jun 18 00:09:44.385: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5931 Jun 18 00:09:44.390: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5931 Jun 18 00:09:44.393: INFO: deleting *v1.Role: csi-mock-volumes-5931-3647/external-resizer-cfg-csi-mock-volumes-5931 Jun 18 00:09:44.396: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5931-3647/csi-resizer-role-cfg Jun 18 
00:09:44.400: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-snapshotter Jun 18 00:09:44.403: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5931 Jun 18 00:09:44.406: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5931 Jun 18 00:09:44.409: INFO: deleting *v1.Role: csi-mock-volumes-5931-3647/external-snapshotter-leaderelection-csi-mock-volumes-5931 Jun 18 00:09:44.413: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5931-3647/external-snapshotter-leaderelection Jun 18 00:09:44.417: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5931-3647/csi-mock Jun 18 00:09:44.420: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5931 Jun 18 00:09:44.424: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5931 Jun 18 00:09:44.427: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5931 Jun 18 00:09:44.430: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5931 Jun 18 00:09:44.433: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5931 Jun 18 00:09:44.436: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5931 Jun 18 00:09:44.440: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5931 Jun 18 00:09:44.443: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5931-3647/csi-mockplugin Jun 18 00:09:44.447: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5931 STEP: deleting the driver namespace: csi-mock-volumes-5931-3647 STEP: Waiting for namespaces [csi-mock-volumes-5931-3647] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:28.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:137.787 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022 exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":5,"skipped":141,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:27.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should be plumbed down when csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 STEP: Building a driver namespace object, basename csi-mock-volumes-1838 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:09:27.796: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-attacher Jun 18 00:09:27.799: INFO: 
creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1838 Jun 18 00:09:27.799: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1838 Jun 18 00:09:27.803: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1838 Jun 18 00:09:27.806: INFO: creating *v1.Role: csi-mock-volumes-1838-1111/external-attacher-cfg-csi-mock-volumes-1838 Jun 18 00:09:27.809: INFO: creating *v1.RoleBinding: csi-mock-volumes-1838-1111/csi-attacher-role-cfg Jun 18 00:09:27.811: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-provisioner Jun 18 00:09:27.814: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1838 Jun 18 00:09:27.814: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1838 Jun 18 00:09:27.817: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1838 Jun 18 00:09:27.819: INFO: creating *v1.Role: csi-mock-volumes-1838-1111/external-provisioner-cfg-csi-mock-volumes-1838 Jun 18 00:09:27.824: INFO: creating *v1.RoleBinding: csi-mock-volumes-1838-1111/csi-provisioner-role-cfg Jun 18 00:09:27.830: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-resizer Jun 18 00:09:27.837: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1838 Jun 18 00:09:27.837: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1838 Jun 18 00:09:27.845: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1838 Jun 18 00:09:27.849: INFO: creating *v1.Role: csi-mock-volumes-1838-1111/external-resizer-cfg-csi-mock-volumes-1838 Jun 18 00:09:27.851: INFO: creating *v1.RoleBinding: csi-mock-volumes-1838-1111/csi-resizer-role-cfg Jun 18 00:09:27.866: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-snapshotter Jun 18 00:09:27.869: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1838 Jun 18 00:09:27.869: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1838 Jun 18 00:09:27.873: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1838 Jun 18 00:09:27.875: INFO: creating *v1.Role: csi-mock-volumes-1838-1111/external-snapshotter-leaderelection-csi-mock-volumes-1838 Jun 18 00:09:27.878: INFO: creating *v1.RoleBinding: csi-mock-volumes-1838-1111/external-snapshotter-leaderelection Jun 18 00:09:27.880: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-mock Jun 18 00:09:27.883: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1838 Jun 18 00:09:27.886: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1838 Jun 18 00:09:27.890: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1838 Jun 18 00:09:27.892: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1838 Jun 18 00:09:27.895: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1838 Jun 18 00:09:27.898: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1838 Jun 18 00:09:27.900: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1838 Jun 18 00:09:27.903: INFO: creating *v1.StatefulSet: csi-mock-volumes-1838-1111/csi-mockplugin Jun 18 00:09:27.907: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1838 Jun 18 00:09:27.910: INFO: creating *v1.StatefulSet: csi-mock-volumes-1838-1111/csi-mockplugin-attacher Jun 18 00:09:27.913: INFO: 
waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1838" Jun 18 00:09:27.916: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1838 to register on node node2 STEP: Creating pod Jun 18 00:09:37.431: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:09:37.435: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-kgnx2] to have phase Bound Jun 18 00:09:37.438: INFO: PersistentVolumeClaim pvc-kgnx2 found but phase is Pending instead of Bound. Jun 18 00:09:39.440: INFO: PersistentVolumeClaim pvc-kgnx2 found and phase=Bound (2.005161108s) STEP: Deleting the previously created pod Jun 18 00:09:58.463: INFO: Deleting pod "pvc-volume-tester-m7rx5" in namespace "csi-mock-volumes-1838" Jun 18 00:09:58.468: INFO: Wait up to 5m0s for pod "pvc-volume-tester-m7rx5" to be fully deleted STEP: Checking CSI driver logs Jun 18 00:10:08.490: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IlZodFBLei1fcGlUYzNPSUdLbUhCOXh2RGpWQnF6NDBTWWlIeEp6aTZ2TVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjU1NTExNTg3LCJpYXQiOjE2NTU1MTA5ODcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTE4MzgiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLW03cng1IiwidWlkIjoiMGY3MjA5YWMtNGI5Mi00NTk5LWFhYzMtZDIyZWM4NzgzNjMwIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiZmI4M2Y3MzQtYWY4OC00MDc1LWFmNDctMTE5OTY0YTRkMjMxIn19LCJuYmYiOjE2NTU1MTA5ODcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTE4Mzg6ZGVmYXVsdCJ9.ACI8Q_gmwi0d3Qu3ZmmAt8bCDrqQEZiEkDDsQP22qcL_zV_6-a_BUYekkbRIxKuD2rP7G0qjSlYXeRkFMQutjNiPqLh7RTbbDe4a6ccj16SGwfAtM0NkmTp6o6OILrxk_YBk7xtn1FwbAmAsz_Q4chTxH_pMM9KR6z4IiTarJqoK5_36EcodYgVCtv_O1N0xtihegbfEbqKMvHMdYq6gHNCI60hYT77mhvk-AUCtOqDuVTDo4D2HC71rVHHigjQXfcQuhPFKUEQtVoGe4GWR5c0xn1SXmsGlIb9_UGGaCwFpfQgTcDLyX-5wd4SvqsaTubLjUM-rWELwSbl7Y_o5ug","expirationTimestamp":"2022-06-18T00:19:47Z"}} Jun 18 00:10:08.490: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/0f7209ac-4b92-4599-aac3-d22ec8783630/volumes/kubernetes.io~csi/pvc-85515190-e3b0-4e46-b5b9-8f56e439fd46/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-m7rx5 Jun 18 00:10:08.490: INFO: Deleting pod "pvc-volume-tester-m7rx5" in namespace "csi-mock-volumes-1838" STEP: Deleting claim pvc-kgnx2 Jun 18 00:10:08.500: INFO: Waiting up to 2m0s for PersistentVolume pvc-85515190-e3b0-4e46-b5b9-8f56e439fd46 to get deleted Jun 18 00:10:08.502: INFO: PersistentVolume pvc-85515190-e3b0-4e46-b5b9-8f56e439fd46 found and phase=Bound (2.123278ms) Jun 18 00:10:10.506: INFO: PersistentVolume pvc-85515190-e3b0-4e46-b5b9-8f56e439fd46 found and phase=Released (2.005985864s) Jun 18 00:10:12.508: INFO: PersistentVolume pvc-85515190-e3b0-4e46-b5b9-8f56e439fd46 found and phase=Released (4.008690618s) Jun 18 00:10:14.511: INFO: PersistentVolume pvc-85515190-e3b0-4e46-b5b9-8f56e439fd46 found and phase=Released (6.011735625s) Jun 18 00:10:16.515: INFO: PersistentVolume pvc-85515190-e3b0-4e46-b5b9-8f56e439fd46 was removed STEP: Deleting storageclass csi-mock-volumes-1838-sczvkgw STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1838 STEP: Waiting for namespaces 
[csi-mock-volumes-1838] to vanish STEP: uninstalling csi mock driver Jun 18 00:10:22.527: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-attacher Jun 18 00:10:22.533: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1838 Jun 18 00:10:22.537: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1838 Jun 18 00:10:22.540: INFO: deleting *v1.Role: csi-mock-volumes-1838-1111/external-attacher-cfg-csi-mock-volumes-1838 Jun 18 00:10:22.544: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1838-1111/csi-attacher-role-cfg Jun 18 00:10:22.547: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-provisioner Jun 18 00:10:22.551: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1838 Jun 18 00:10:22.554: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1838 Jun 18 00:10:22.558: INFO: deleting *v1.Role: csi-mock-volumes-1838-1111/external-provisioner-cfg-csi-mock-volumes-1838 Jun 18 00:10:22.564: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1838-1111/csi-provisioner-role-cfg Jun 18 00:10:22.571: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-resizer Jun 18 00:10:22.577: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1838 Jun 18 00:10:22.584: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1838 Jun 18 00:10:22.587: INFO: deleting *v1.Role: csi-mock-volumes-1838-1111/external-resizer-cfg-csi-mock-volumes-1838 Jun 18 00:10:22.590: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1838-1111/csi-resizer-role-cfg Jun 18 00:10:22.594: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-snapshotter Jun 18 00:10:22.597: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1838 Jun 18 00:10:22.600: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1838 Jun 18 00:10:22.604: INFO: deleting *v1.Role: csi-mock-volumes-1838-1111/external-snapshotter-leaderelection-csi-mock-volumes-1838 Jun 18 00:10:22.607: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1838-1111/external-snapshotter-leaderelection Jun 18 00:10:22.611: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1838-1111/csi-mock Jun 18 00:10:22.615: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1838 Jun 18 00:10:22.618: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1838 Jun 18 00:10:22.622: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1838 Jun 18 00:10:22.625: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1838 Jun 18 00:10:22.628: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1838 Jun 18 00:10:22.631: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1838 Jun 18 00:10:22.635: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1838 Jun 18 00:10:22.639: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1838-1111/csi-mockplugin Jun 18 00:10:22.642: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1838 Jun 18 00:10:22.649: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1838-1111/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1838-1111 STEP: Waiting for namespaces [csi-mock-volumes-1838-1111] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:34.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:66.934 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1496 token should be plumbed down when csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":13,"skipped":515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:21.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-601a1816-32e6-4615-bad1-89308130eaba" Jun 18 00:10:25.827: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-601a1816-32e6-4615-bad1-89308130eaba && dd if=/dev/zero of=/tmp/local-volume-test-601a1816-32e6-4615-bad1-89308130eaba/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-601a1816-32e6-4615-bad1-89308130eaba/file] Namespace:persistent-local-volumes-test-690 PodName:hostexec-node2-jz8fr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:25.827: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:25.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-601a1816-32e6-4615-bad1-89308130eaba/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-690 PodName:hostexec-node2-jz8fr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:25.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:10:26.039: INFO: Creating a PV followed by a PVC Jun 18 00:10:26.048: INFO: Waiting for PV local-pvktgrc to bind to PVC pvc-4pjwf Jun 18 00:10:26.048: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4pjwf] to have phase Bound Jun 18 00:10:26.053: INFO: PersistentVolumeClaim pvc-4pjwf found but phase is Pending instead of Bound. 
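For reference, the "Creating a PV followed by a PVC" step above builds a statically provisioned local PersistentVolume pinned to one node plus a claim that binds to it. The following is only a rough sketch of that object pair in Go, assuming the v1.21-era k8s.io/api module (where PVC resources are corev1.ResourceRequirements, as in the dumps earlier in this log); every name, path and size below is illustrative, not the generated values (local-pvktgrc, pvc-4pjwf) used by the framework.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	sc := "local-storage" // StorageClass name is an assumption; the framework generates its own

	// A local PV backed by a path on node2, restricted to that node via required node affinity.
	pv := corev1.PersistentVolume{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "PersistentVolume"},
		ObjectMeta: metav1.ObjectMeta{Name: "example-local-pv"},
		Spec: corev1.PersistentVolumeSpec{
			StorageClassName:              sc,
			Capacity:                      corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/disks/vol1"}, // illustrative path
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node2"},
						}},
					}},
				},
			},
		},
	}

	// A claim that can bind to the PV above (same StorageClass, compatible size and access mode).
	pvc := corev1.PersistentVolumeClaim{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "PersistentVolumeClaim"},
		ObjectMeta: metav1.ObjectMeta{Name: "example-local-pvc", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}

	for _, obj := range []interface{}{pv, pvc} {
		out, err := yaml.Marshal(obj)
		if err != nil {
			panic(err)
		}
		fmt.Printf("---\n%s", out)
	}
}

Once the controller matches such a claim against the PV, it goes through the same Pending-then-Bound transition that the surrounding log lines record for pvc-4pjwf.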
Jun 18 00:10:28.057: INFO: PersistentVolumeClaim pvc-4pjwf found and phase=Bound (2.008904603s) Jun 18 00:10:28.057: INFO: Waiting up to 3m0s for PersistentVolume local-pvktgrc to have phase Bound Jun 18 00:10:28.060: INFO: PersistentVolume local-pvktgrc found and phase=Bound (2.885224ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 18 00:10:34.087: INFO: pod "pod-a378b653-dbc3-4ab5-a259-e234359f7241" created on Node "node2" STEP: Writing in pod1 Jun 18 00:10:34.087: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-690 PodName:pod-a378b653-dbc3-4ab5-a259-e234359f7241 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:10:34.087: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:34.168: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:10:34.169: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-690 PodName:pod-a378b653-dbc3-4ab5-a259-e234359f7241 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:10:34.169: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:34.245: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 18 00:10:38.268: INFO: pod "pod-a1d9a946-d84f-4e02-8e49-7b9234f6a603" created on Node "node2" Jun 18 00:10:38.268: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-690 PodName:pod-a1d9a946-d84f-4e02-8e49-7b9234f6a603 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:10:38.268: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:38.351: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 18 00:10:38.351: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-690 PodName:pod-a1d9a946-d84f-4e02-8e49-7b9234f6a603 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:10:38.351: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:38.467: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 18 00:10:38.467: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-690 PodName:pod-a378b653-dbc3-4ab5-a259-e234359f7241 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:10:38.467: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:38.546: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/dev/loop0", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-a378b653-dbc3-4ab5-a259-e234359f7241 in namespace persistent-local-volumes-test-690 STEP: Deleting pod2 STEP: Deleting pod pod-a1d9a946-d84f-4e02-8e49-7b9234f6a603 in 
namespace persistent-local-volumes-test-690 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:10:38.557: INFO: Deleting PersistentVolumeClaim "pvc-4pjwf" Jun 18 00:10:38.560: INFO: Deleting PersistentVolume "local-pvktgrc" Jun 18 00:10:38.564: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-601a1816-32e6-4615-bad1-89308130eaba/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-690 PodName:hostexec-node2-jz8fr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:38.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-601a1816-32e6-4615-bad1-89308130eaba/file Jun 18 00:10:38.656: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-690 PodName:hostexec-node2-jz8fr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:38.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-601a1816-32e6-4615-bad1-89308130eaba Jun 18 00:10:38.738: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-601a1816-32e6-4615-bad1-89308130eaba] Namespace:persistent-local-volumes-test-690 PodName:hostexec-node2-jz8fr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:38.738: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:38.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-690" for this suite. 
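The two tester pods in this spec share the bound claim through an ordinary persistentVolumeClaim volume and simply shell into /mnt/volume1. Below is a hedged sketch of what such a pod looks like; the image, names and claim name are assumptions for illustration (the suite's pod-a378b653-... pods and their images are generated by the framework).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "write-pod", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "write-pod",
				Image: "busybox", // assumption; the suite uses its own test images
				// Same shape of command the log runs via ExecWithOptions, plus a sleep so a
				// second pod has time to read the file.
				Command: []string{"/bin/sh", "-c",
					"mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file; sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "volume1",
					MountPath: "/mnt/volume1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "example-local-pvc", // hypothetical; the test binds pvc-4pjwf
					},
				},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

A second pod with the same volumes stanza can then cat /mnt/volume1/test-file and observe the first pod's write, which is exactly the pod1-writes/pod2-reads check recorded above.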
• [SLOW TEST:17.056 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:34.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 STEP: Creating a pod to test emptydir volume type on node default medium Jun 18 00:10:34.796: INFO: Waiting up to 5m0s for pod "pod-278b6254-4267-443a-ab9a-9a21f487acd8" in namespace "emptydir-7167" to be "Succeeded or Failed" Jun 18 00:10:34.799: INFO: Pod "pod-278b6254-4267-443a-ab9a-9a21f487acd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882109ms Jun 18 00:10:36.803: INFO: Pod "pod-278b6254-4267-443a-ab9a-9a21f487acd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006378855s Jun 18 00:10:38.807: INFO: Pod "pod-278b6254-4267-443a-ab9a-9a21f487acd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010504212s STEP: Saw pod success Jun 18 00:10:38.807: INFO: Pod "pod-278b6254-4267-443a-ab9a-9a21f487acd8" satisfied condition "Succeeded or Failed" Jun 18 00:10:38.809: INFO: Trying to get logs from node node1 pod pod-278b6254-4267-443a-ab9a-9a21f487acd8 container test-container: STEP: delete the pod Jun 18 00:10:38.826: INFO: Waiting for pod pod-278b6254-4267-443a-ab9a-9a21f487acd8 to disappear Jun 18 00:10:38.828: INFO: Pod pod-278b6254-4267-443a-ab9a-9a21f487acd8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:38.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7167" for this suite. 
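The EmptyDir spec above checks that a default-medium emptyDir honors the pod-level fsGroup, so the mounted directory ends up owned by and writable for that group. A minimal sketch of the pod shape follows; the GID, image and commands are illustrative assumptions, not the constants from empty_dir.go.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	fsGroup := int64(1234) // illustrative GID

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumption; the suite's pods use agnhost/mounttest images
				// Print the volume's mode and ownership, roughly what the test asserts on.
				Command:      []string{"/bin/sh", "-c", "ls -ld /test-volume && id"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium left at the default ("") = node filesystem, i.e. "default medium" above.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}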
• ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":391,"failed":0} S ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":14,"skipped":557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:57.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-7985 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:09:57.187: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-attacher Jun 18 00:09:57.190: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7985 Jun 18 00:09:57.190: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7985 Jun 18 00:09:57.193: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7985 Jun 18 00:09:57.196: INFO: creating *v1.Role: csi-mock-volumes-7985-1550/external-attacher-cfg-csi-mock-volumes-7985 Jun 18 00:09:57.201: INFO: creating *v1.RoleBinding: csi-mock-volumes-7985-1550/csi-attacher-role-cfg Jun 18 00:09:57.203: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-provisioner Jun 18 00:09:57.206: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7985 Jun 18 00:09:57.206: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7985 Jun 18 00:09:57.209: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7985 Jun 18 00:09:57.212: INFO: creating *v1.Role: csi-mock-volumes-7985-1550/external-provisioner-cfg-csi-mock-volumes-7985 Jun 18 00:09:57.215: INFO: creating *v1.RoleBinding: csi-mock-volumes-7985-1550/csi-provisioner-role-cfg Jun 18 00:09:57.217: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-resizer Jun 18 00:09:57.221: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7985 Jun 18 00:09:57.221: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7985 Jun 18 00:09:57.223: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7985 Jun 18 00:09:57.227: INFO: creating *v1.Role: csi-mock-volumes-7985-1550/external-resizer-cfg-csi-mock-volumes-7985 Jun 18 00:09:57.229: INFO: creating *v1.RoleBinding: csi-mock-volumes-7985-1550/csi-resizer-role-cfg Jun 18 00:09:57.233: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-snapshotter Jun 18 00:09:57.235: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7985 Jun 18 00:09:57.235: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7985 Jun 18 00:09:57.237: INFO: creating *v1.ClusterRoleBinding: 
csi-snapshotter-role-csi-mock-volumes-7985 Jun 18 00:09:57.240: INFO: creating *v1.Role: csi-mock-volumes-7985-1550/external-snapshotter-leaderelection-csi-mock-volumes-7985 Jun 18 00:09:57.243: INFO: creating *v1.RoleBinding: csi-mock-volumes-7985-1550/external-snapshotter-leaderelection Jun 18 00:09:57.246: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-mock Jun 18 00:09:57.248: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7985 Jun 18 00:09:57.251: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7985 Jun 18 00:09:57.254: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7985 Jun 18 00:09:57.256: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7985 Jun 18 00:09:57.259: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7985 Jun 18 00:09:57.262: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7985 Jun 18 00:09:57.265: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7985 Jun 18 00:09:57.268: INFO: creating *v1.StatefulSet: csi-mock-volumes-7985-1550/csi-mockplugin Jun 18 00:09:57.273: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7985 Jun 18 00:09:57.276: INFO: creating *v1.StatefulSet: csi-mock-volumes-7985-1550/csi-mockplugin-attacher Jun 18 00:09:57.279: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7985" Jun 18 00:09:57.282: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7985 to register on node node2 STEP: Creating pod Jun 18 00:10:11.803: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 18 00:10:11.824: INFO: Deleting pod "pvc-volume-tester-zsq5w" in namespace "csi-mock-volumes-7985" Jun 18 00:10:11.829: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zsq5w" to be fully deleted STEP: Deleting pod pvc-volume-tester-zsq5w Jun 18 00:10:11.832: INFO: Deleting pod "pvc-volume-tester-zsq5w" in namespace "csi-mock-volumes-7985" STEP: Deleting claim pvc-f62xk STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-7985 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7985 STEP: Waiting for namespaces [csi-mock-volumes-7985] to vanish STEP: uninstalling csi mock driver Jun 18 00:10:17.894: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-attacher Jun 18 00:10:17.898: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7985 Jun 18 00:10:17.901: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7985 Jun 18 00:10:17.905: INFO: deleting *v1.Role: csi-mock-volumes-7985-1550/external-attacher-cfg-csi-mock-volumes-7985 Jun 18 00:10:17.908: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7985-1550/csi-attacher-role-cfg Jun 18 00:10:17.911: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-provisioner Jun 18 00:10:17.914: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7985 Jun 18 00:10:17.918: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7985 Jun 18 00:10:17.921: INFO: deleting *v1.Role: csi-mock-volumes-7985-1550/external-provisioner-cfg-csi-mock-volumes-7985 Jun 18 00:10:17.930: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7985-1550/csi-provisioner-role-cfg Jun 18 00:10:17.941: INFO: deleting 
*v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-resizer Jun 18 00:10:17.949: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7985 Jun 18 00:10:17.952: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7985 Jun 18 00:10:17.955: INFO: deleting *v1.Role: csi-mock-volumes-7985-1550/external-resizer-cfg-csi-mock-volumes-7985 Jun 18 00:10:17.959: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7985-1550/csi-resizer-role-cfg Jun 18 00:10:17.963: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-snapshotter Jun 18 00:10:17.966: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7985 Jun 18 00:10:17.969: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7985 Jun 18 00:10:17.973: INFO: deleting *v1.Role: csi-mock-volumes-7985-1550/external-snapshotter-leaderelection-csi-mock-volumes-7985 Jun 18 00:10:17.977: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7985-1550/external-snapshotter-leaderelection Jun 18 00:10:17.981: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7985-1550/csi-mock Jun 18 00:10:17.984: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7985 Jun 18 00:10:17.987: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7985 Jun 18 00:10:17.991: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7985 Jun 18 00:10:17.994: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7985 Jun 18 00:10:17.998: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7985 Jun 18 00:10:18.001: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7985 Jun 18 00:10:18.005: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7985 Jun 18 00:10:18.008: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7985-1550/csi-mockplugin Jun 18 00:10:18.012: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7985 Jun 18 00:10:18.016: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7985-1550/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7985-1550 STEP: Waiting for namespaces [csi-mock-volumes-7985-1550] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:46.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:48.923 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":7,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 
00:10:38.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 18 00:10:38.922: INFO: The status of Pod test-hostpath-type-twnvz is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:10:40.926: INFO: The status of Pod test-hostpath-type-twnvz is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:10:42.925: INFO: The status of Pod test-hostpath-type-twnvz is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:48.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-15" for this suite. • [SLOW TEST:10.098 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev","total":-1,"completed":11,"skipped":417,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:49.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Jun 18 00:10:49.033: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Jun 18 00:10:49.038: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:49.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-5955" for this suite. 
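The PVC Protection spec is skipped here because creating a claim with no explicit class requires a cluster default StorageClass, and this cluster reports "No default storage class found". Marking a class as default is done with a well-known annotation; the sketch below uses a made-up class and provisioner name purely for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	reclaim := corev1.PersistentVolumeReclaimDelete
	binding := storagev1.VolumeBindingWaitForFirstConsumer

	sc := storagev1.StorageClass{
		TypeMeta: metav1.TypeMeta{APIVersion: "storage.k8s.io/v1", Kind: "StorageClass"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "example-default-sc",
			// The annotation that marks a class as the cluster default.
			Annotations: map[string]string{"storageclass.kubernetes.io/is-default-class": "true"},
		},
		Provisioner:       "example.com/provisioner", // hypothetical provisioner
		ReclaimPolicy:     &reclaim,
		VolumeBindingMode: &binding,
	}

	out, err := yaml.Marshal(sc)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

With such a class present, the BeforeEach above would create its test claim instead of skipping.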
[AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:46.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 STEP: Creating configMap with name configmap-test-volume-map-dcc3765d-1d8c-4d86-af52-3850cbef9697 STEP: Creating a pod to test consume configMaps Jun 18 00:10:46.269: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3435409-4d53-41f5-b0b1-37b7ccb97889" in namespace "configmap-4589" to be "Succeeded or Failed" Jun 18 00:10:46.272: INFO: Pod "pod-configmaps-f3435409-4d53-41f5-b0b1-37b7ccb97889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88594ms Jun 18 00:10:48.276: INFO: Pod "pod-configmaps-f3435409-4d53-41f5-b0b1-37b7ccb97889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00611819s Jun 18 00:10:50.280: INFO: Pod "pod-configmaps-f3435409-4d53-41f5-b0b1-37b7ccb97889": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010233004s STEP: Saw pod success Jun 18 00:10:50.280: INFO: Pod "pod-configmaps-f3435409-4d53-41f5-b0b1-37b7ccb97889" satisfied condition "Succeeded or Failed" Jun 18 00:10:50.282: INFO: Trying to get logs from node node1 pod pod-configmaps-f3435409-4d53-41f5-b0b1-37b7ccb97889 container agnhost-container: STEP: delete the pod Jun 18 00:10:50.294: INFO: Waiting for pod pod-configmaps-f3435409-4d53-41f5-b0b1-37b7ccb97889 to disappear Jun 18 00:10:50.296: INFO: Pod pod-configmaps-f3435409-4d53-41f5-b0b1-37b7ccb97889 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:50.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4589" for this suite. 
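The ConfigMap spec above projects a key under a mapped path and consumes it as a non-root user with an fsGroup, then checks the file's content and mode. The sketch below is only an approximation of that shape; the key names, paths, IDs and image are chosen for illustration rather than copied from configmap_volume.go.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid := int64(1000)
	fsGroup := int64(1001)
	mode := int32(0640)

	cm := corev1.ConfigMap{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map", Namespace: "default"},
		Data:       map[string]string{"data-1": "value-1"},
	}

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid, // run the consumer as non-root
				FSGroup:   &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "busybox", // assumption; the suite's consumer uses agnhost
				Command: []string{"/bin/sh", "-c",
					"cat /etc/configmap-volume/path/to/data-2 && ls -l /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						// "with mappings": project key data-1 under a different relative path.
						Items:       []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
						DefaultMode: &mode,
					},
				},
			}},
		},
	}

	for _, obj := range []interface{}{cm, pod} {
		out, err := yaml.Marshal(obj)
		if err != nil {
			panic(err)
		}
		fmt.Printf("---\n%s", out)
	}
}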
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:11.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-5835 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:10:11.623: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-attacher Jun 18 00:10:11.625: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5835 Jun 18 00:10:11.625: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5835 Jun 18 00:10:11.628: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5835 Jun 18 00:10:11.631: INFO: creating *v1.Role: csi-mock-volumes-5835-6944/external-attacher-cfg-csi-mock-volumes-5835 Jun 18 00:10:11.633: INFO: creating *v1.RoleBinding: csi-mock-volumes-5835-6944/csi-attacher-role-cfg Jun 18 00:10:11.636: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-provisioner Jun 18 00:10:11.638: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5835 Jun 18 00:10:11.638: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5835 Jun 18 00:10:11.641: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5835 Jun 18 00:10:11.645: INFO: creating *v1.Role: csi-mock-volumes-5835-6944/external-provisioner-cfg-csi-mock-volumes-5835 Jun 18 00:10:11.647: INFO: creating *v1.RoleBinding: csi-mock-volumes-5835-6944/csi-provisioner-role-cfg Jun 18 00:10:11.650: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-resizer Jun 18 00:10:11.653: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5835 Jun 18 00:10:11.653: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5835 Jun 18 00:10:11.655: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5835 Jun 18 00:10:11.658: INFO: creating *v1.Role: csi-mock-volumes-5835-6944/external-resizer-cfg-csi-mock-volumes-5835 Jun 18 00:10:11.661: INFO: creating *v1.RoleBinding: csi-mock-volumes-5835-6944/csi-resizer-role-cfg Jun 18 00:10:11.663: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-snapshotter Jun 18 00:10:11.666: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5835 Jun 18 00:10:11.666: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5835 Jun 18 00:10:11.669: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5835 Jun 18 00:10:11.672: INFO: creating *v1.Role: csi-mock-volumes-5835-6944/external-snapshotter-leaderelection-csi-mock-volumes-5835 Jun 18 00:10:11.675: INFO: creating *v1.RoleBinding: csi-mock-volumes-5835-6944/external-snapshotter-leaderelection Jun 18 
00:10:11.678: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-mock Jun 18 00:10:11.681: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5835 Jun 18 00:10:11.684: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5835 Jun 18 00:10:11.686: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5835 Jun 18 00:10:11.689: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5835 Jun 18 00:10:11.692: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5835 Jun 18 00:10:11.695: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5835 Jun 18 00:10:11.697: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5835 Jun 18 00:10:11.700: INFO: creating *v1.StatefulSet: csi-mock-volumes-5835-6944/csi-mockplugin Jun 18 00:10:11.705: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5835 Jun 18 00:10:11.708: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5835" Jun 18 00:10:11.710: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5835 to register on node node2 STEP: Creating pod Jun 18 00:10:16.724: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:10:16.730: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-kd5wl] to have phase Bound Jun 18 00:10:16.732: INFO: PersistentVolumeClaim pvc-kd5wl found but phase is Pending instead of Bound. Jun 18 00:10:18.737: INFO: PersistentVolumeClaim pvc-kd5wl found and phase=Bound (2.00690454s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-7h54s Jun 18 00:10:22.766: INFO: Deleting pod "pvc-volume-tester-7h54s" in namespace "csi-mock-volumes-5835" Jun 18 00:10:22.770: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7h54s" to be fully deleted STEP: Deleting claim pvc-kd5wl Jun 18 00:10:30.788: INFO: Waiting up to 2m0s for PersistentVolume pvc-0290c63f-b0a6-422d-aad8-71eff11f16a2 to get deleted Jun 18 00:10:30.790: INFO: PersistentVolume pvc-0290c63f-b0a6-422d-aad8-71eff11f16a2 found and phase=Bound (2.146409ms) Jun 18 00:10:32.793: INFO: PersistentVolume pvc-0290c63f-b0a6-422d-aad8-71eff11f16a2 was removed STEP: Deleting storageclass csi-mock-volumes-5835-sclbbn6 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5835 STEP: Waiting for namespaces [csi-mock-volumes-5835] to vanish STEP: uninstalling csi mock driver Jun 18 00:10:38.807: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-attacher Jun 18 00:10:38.811: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5835 Jun 18 00:10:38.814: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5835 Jun 18 00:10:38.819: INFO: deleting *v1.Role: csi-mock-volumes-5835-6944/external-attacher-cfg-csi-mock-volumes-5835 Jun 18 00:10:38.823: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5835-6944/csi-attacher-role-cfg Jun 18 00:10:38.827: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-provisioner Jun 18 00:10:38.830: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5835 Jun 18 00:10:38.833: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5835 Jun 18 00:10:38.837: INFO: deleting *v1.Role: csi-mock-volumes-5835-6944/external-provisioner-cfg-csi-mock-volumes-5835 Jun 18 
00:10:38.840: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5835-6944/csi-provisioner-role-cfg Jun 18 00:10:38.843: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-resizer Jun 18 00:10:38.846: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5835 Jun 18 00:10:38.850: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5835 Jun 18 00:10:38.853: INFO: deleting *v1.Role: csi-mock-volumes-5835-6944/external-resizer-cfg-csi-mock-volumes-5835 Jun 18 00:10:38.857: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5835-6944/csi-resizer-role-cfg Jun 18 00:10:38.860: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-snapshotter Jun 18 00:10:38.863: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5835 Jun 18 00:10:38.867: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5835 Jun 18 00:10:38.870: INFO: deleting *v1.Role: csi-mock-volumes-5835-6944/external-snapshotter-leaderelection-csi-mock-volumes-5835 Jun 18 00:10:38.874: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5835-6944/external-snapshotter-leaderelection Jun 18 00:10:38.878: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5835-6944/csi-mock Jun 18 00:10:38.882: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5835 Jun 18 00:10:38.886: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5835 Jun 18 00:10:38.888: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5835 Jun 18 00:10:38.892: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5835 Jun 18 00:10:38.896: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5835 Jun 18 00:10:38.899: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5835 Jun 18 00:10:38.903: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5835 Jun 18 00:10:38.906: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5835-6944/csi-mockplugin Jun 18 00:10:38.909: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5835 STEP: deleting the driver namespace: csi-mock-volumes-5835-6944 STEP: Waiting for namespaces [csi-mock-volumes-5835-6944] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:50.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:39.381 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":6,"skipped":257,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes 
client Jun 18 00:10:50.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Jun 18 00:10:51.000: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:51.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5653" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Jun 18 00:10:51.010: INFO: AfterEach: Cleaning up test resources Jun 18 00:10:51.010: INFO: pvc is nil Jun 18 00:10:51.010: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:41.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-1150 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:09:41.173: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-attacher Jun 18 00:09:41.176: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1150 Jun 18 00:09:41.176: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1150 Jun 18 00:09:41.178: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1150 Jun 18 00:09:41.182: INFO: creating *v1.Role: csi-mock-volumes-1150-3854/external-attacher-cfg-csi-mock-volumes-1150 Jun 18 00:09:41.185: INFO: creating *v1.RoleBinding: csi-mock-volumes-1150-3854/csi-attacher-role-cfg Jun 18 00:09:41.187: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-provisioner Jun 18 00:09:41.190: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1150 Jun 18 00:09:41.190: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1150 Jun 18 00:09:41.193: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1150 Jun 18 00:09:41.197: INFO: creating 
*v1.Role: csi-mock-volumes-1150-3854/external-provisioner-cfg-csi-mock-volumes-1150 Jun 18 00:09:41.200: INFO: creating *v1.RoleBinding: csi-mock-volumes-1150-3854/csi-provisioner-role-cfg Jun 18 00:09:41.203: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-resizer Jun 18 00:09:41.206: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1150 Jun 18 00:09:41.206: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1150 Jun 18 00:09:41.208: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1150 Jun 18 00:09:41.211: INFO: creating *v1.Role: csi-mock-volumes-1150-3854/external-resizer-cfg-csi-mock-volumes-1150 Jun 18 00:09:41.214: INFO: creating *v1.RoleBinding: csi-mock-volumes-1150-3854/csi-resizer-role-cfg Jun 18 00:09:41.217: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-snapshotter Jun 18 00:09:41.220: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1150 Jun 18 00:09:41.220: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1150 Jun 18 00:09:41.223: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1150 Jun 18 00:09:41.227: INFO: creating *v1.Role: csi-mock-volumes-1150-3854/external-snapshotter-leaderelection-csi-mock-volumes-1150 Jun 18 00:09:41.230: INFO: creating *v1.RoleBinding: csi-mock-volumes-1150-3854/external-snapshotter-leaderelection Jun 18 00:09:41.233: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-mock Jun 18 00:09:41.235: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1150 Jun 18 00:09:41.238: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1150 Jun 18 00:09:41.241: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1150 Jun 18 00:09:41.243: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1150 Jun 18 00:09:41.247: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1150 Jun 18 00:09:41.249: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1150 Jun 18 00:09:41.251: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1150 Jun 18 00:09:41.254: INFO: creating *v1.StatefulSet: csi-mock-volumes-1150-3854/csi-mockplugin Jun 18 00:09:41.258: INFO: creating *v1.StatefulSet: csi-mock-volumes-1150-3854/csi-mockplugin-attacher Jun 18 00:09:41.261: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1150 to register on node node2 STEP: Creating pod Jun 18 00:09:50.775: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:09:50.780: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-sgzt6] to have phase Bound Jun 18 00:09:50.782: INFO: PersistentVolumeClaim pvc-sgzt6 found but phase is Pending instead of Bound. 
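The two mock-driver specs around this point differ in one object: the attach test above creates a *v1.CSIDriver (and verifies no VolumeAttachment appears), while this workload-information test deliberately deploys the plugin without one, so the kubelet has no podInfoOnMount hint and must not pass pod metadata to NodePublishVolume. A minimal sketch of that CSIDriver object follows; it is not the framework's own deployment code, and the driver name and field values are illustrative.

```go
// Hedged sketch (not the e2e framework's code): the CSIDriver object whose presence or
// absence these mock-driver specs exercise. attachRequired=false means no VolumeAttachment
// is needed; podInfoOnMount controls whether kubelet adds csi.storage.k8s.io/pod.* keys
// to NodePublishVolume's volume_context. The driver name is illustrative.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	attachRequired := false // skip VolumeAttachment for this driver
	podInfoOnMount := true  // pass pod info on mount (only honored when the object exists)

	driver := storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-example"}, // illustrative name
		Spec: storagev1.CSIDriverSpec{
			AttachRequired: &attachRequired,
			PodInfoOnMount: &podInfoOnMount,
		},
	}
	fmt.Printf("%+v\n", driver.Spec)
}
```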
Jun 18 00:09:52.786: INFO: PersistentVolumeClaim pvc-sgzt6 found and phase=Bound (2.00603433s) STEP: Deleting the previously created pod Jun 18 00:10:06.809: INFO: Deleting pod "pvc-volume-tester-9z5bk" in namespace "csi-mock-volumes-1150" Jun 18 00:10:06.815: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9z5bk" to be fully deleted STEP: Checking CSI driver logs Jun 18 00:10:18.887: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/c9a1cda8-d725-43c8-90b8-9c1ebd08a22c/volumes/kubernetes.io~csi/pvc-8e7324dc-2cc2-4a7b-9a9f-b5cd33ed8a7f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-9z5bk Jun 18 00:10:18.887: INFO: Deleting pod "pvc-volume-tester-9z5bk" in namespace "csi-mock-volumes-1150" STEP: Deleting claim pvc-sgzt6 Jun 18 00:10:18.895: INFO: Waiting up to 2m0s for PersistentVolume pvc-8e7324dc-2cc2-4a7b-9a9f-b5cd33ed8a7f to get deleted Jun 18 00:10:18.898: INFO: PersistentVolume pvc-8e7324dc-2cc2-4a7b-9a9f-b5cd33ed8a7f found and phase=Bound (2.437084ms) Jun 18 00:10:20.904: INFO: PersistentVolume pvc-8e7324dc-2cc2-4a7b-9a9f-b5cd33ed8a7f was removed STEP: Deleting storageclass csi-mock-volumes-1150-sczh74q STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1150 STEP: Waiting for namespaces [csi-mock-volumes-1150] to vanish STEP: uninstalling csi mock driver Jun 18 00:10:26.919: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-attacher Jun 18 00:10:26.923: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1150 Jun 18 00:10:26.927: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1150 Jun 18 00:10:26.930: INFO: deleting *v1.Role: csi-mock-volumes-1150-3854/external-attacher-cfg-csi-mock-volumes-1150 Jun 18 00:10:26.934: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1150-3854/csi-attacher-role-cfg Jun 18 00:10:26.938: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-provisioner Jun 18 00:10:26.941: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1150 Jun 18 00:10:26.944: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1150 Jun 18 00:10:26.947: INFO: deleting *v1.Role: csi-mock-volumes-1150-3854/external-provisioner-cfg-csi-mock-volumes-1150 Jun 18 00:10:26.950: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1150-3854/csi-provisioner-role-cfg Jun 18 00:10:26.953: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-resizer Jun 18 00:10:26.959: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1150 Jun 18 00:10:26.963: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1150 Jun 18 00:10:26.967: INFO: deleting *v1.Role: csi-mock-volumes-1150-3854/external-resizer-cfg-csi-mock-volumes-1150 Jun 18 00:10:26.970: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1150-3854/csi-resizer-role-cfg Jun 18 00:10:26.974: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-snapshotter Jun 18 00:10:26.978: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1150 Jun 18 00:10:26.982: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1150 Jun 18 00:10:26.985: INFO: deleting *v1.Role: csi-mock-volumes-1150-3854/external-snapshotter-leaderelection-csi-mock-volumes-1150 Jun 18 
00:10:26.989: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1150-3854/external-snapshotter-leaderelection Jun 18 00:10:26.995: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1150-3854/csi-mock Jun 18 00:10:26.998: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1150 Jun 18 00:10:27.002: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1150 Jun 18 00:10:27.005: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1150 Jun 18 00:10:27.008: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1150 Jun 18 00:10:27.011: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1150 Jun 18 00:10:27.015: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1150 Jun 18 00:10:27.018: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1150 Jun 18 00:10:27.022: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1150-3854/csi-mockplugin Jun 18 00:10:27.025: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1150-3854/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1150-3854 STEP: Waiting for namespaces [csi-mock-volumes-1150-3854] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:10:55.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:73.934 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":4,"skipped":87,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:51.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:10:55.096: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a899e963-b81a-485b-b59f-68f03f7a35db] Namespace:persistent-local-volumes-test-5767 PodName:hostexec-node2-vlptg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:55.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating 
local PVCs and PVs Jun 18 00:10:55.182: INFO: Creating a PV followed by a PVC Jun 18 00:10:55.188: INFO: Waiting for PV local-pvgcsmz to bind to PVC pvc-mdvv4 Jun 18 00:10:55.188: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-mdvv4] to have phase Bound Jun 18 00:10:55.191: INFO: PersistentVolumeClaim pvc-mdvv4 found but phase is Pending instead of Bound. Jun 18 00:10:57.194: INFO: PersistentVolumeClaim pvc-mdvv4 found and phase=Bound (2.005458616s) Jun 18 00:10:57.194: INFO: Waiting up to 3m0s for PersistentVolume local-pvgcsmz to have phase Bound Jun 18 00:10:57.196: INFO: PersistentVolume local-pvgcsmz found and phase=Bound (1.998ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:11:01.221: INFO: pod "pod-293ea92a-19b5-4810-b991-860d2840e2a8" created on Node "node2" STEP: Writing in pod1 Jun 18 00:11:01.221: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5767 PodName:pod-293ea92a-19b5-4810-b991-860d2840e2a8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:01.221: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:01.306: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 18 00:11:01.306: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5767 PodName:pod-293ea92a-19b5-4810-b991-860d2840e2a8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:01.306: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:01.402: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-293ea92a-19b5-4810-b991-860d2840e2a8 in namespace persistent-local-volumes-test-5767 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:11:01.406: INFO: Deleting PersistentVolumeClaim "pvc-mdvv4" Jun 18 00:11:01.410: INFO: Deleting PersistentVolume "local-pvgcsmz" STEP: Removing the test directory Jun 18 00:11:01.415: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a899e963-b81a-485b-b59f-68f03f7a35db] Namespace:persistent-local-volumes-test-5767 PodName:hostexec-node2-vlptg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:01.415: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:01.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "persistent-local-volumes-test-5767" for this suite. • [SLOW TEST:10.467 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":289,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:49.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:10:53.116: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6390db0c-8f11-4b57-8b22-f01835c60ca0] Namespace:persistent-local-volumes-test-1324 PodName:hostexec-node2-c8nv9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:53.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:10:53.215: INFO: Creating a PV followed by a PVC Jun 18 00:10:53.223: INFO: Waiting for PV local-pvmbmw8 to bind to PVC pvc-wdbjf Jun 18 00:10:53.223: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-wdbjf] to have phase Bound Jun 18 00:10:53.226: INFO: PersistentVolumeClaim pvc-wdbjf found but phase is Pending instead of Bound. Jun 18 00:10:55.231: INFO: PersistentVolumeClaim pvc-wdbjf found but phase is Pending instead of Bound. 
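The PersistentVolumes-local specs in this stretch all follow the same shape: create a directory under /tmp/local-volume-test-* on one node, wrap it in a pre-bound PV/PVC pair, then run a pod against it. The sketch below shows roughly what that PV/PVC pair looks like, assuming the v1.21 core/v1 API used by this suite; names, paths and sizes are illustrative rather than taken from the framework.

```go
// Rough sketch of the PV/PVC pair the local-volume specs create: a PersistentVolume
// backed by a host directory, pinned to one node via nodeAffinity, plus a claim sized
// to bind to it. Names, path and capacity are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := "local-storage" // hypothetical StorageClass name

	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-example"},
		Spec: corev1.PersistentVolumeSpec{
			StorageClassName: sc,
			Capacity:         corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-test-example"},
			},
			// A local PV must declare which node actually hosts the directory.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node2"},
						}},
					}},
				},
			},
		},
	}

	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-example"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
			},
		},
	}
	fmt.Println(pv.Name, pvc.Name)
}
```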
Jun 18 00:10:57.235: INFO: PersistentVolumeClaim pvc-wdbjf found and phase=Bound (4.011289829s) Jun 18 00:10:57.235: INFO: Waiting up to 3m0s for PersistentVolume local-pvmbmw8 to have phase Bound Jun 18 00:10:57.238: INFO: PersistentVolume local-pvmbmw8 found and phase=Bound (2.749216ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:11:01.263: INFO: pod "pod-cfabc395-c818-4295-87c0-1a9e38bba6a3" created on Node "node2" STEP: Writing in pod1 Jun 18 00:11:01.263: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1324 PodName:pod-cfabc395-c818-4295-87c0-1a9e38bba6a3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:01.263: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:01.367: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 18 00:11:01.367: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1324 PodName:pod-cfabc395-c818-4295-87c0-1a9e38bba6a3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:01.367: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:01.452: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 18 00:11:01.452: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-6390db0c-8f11-4b57-8b22-f01835c60ca0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1324 PodName:pod-cfabc395-c818-4295-87c0-1a9e38bba6a3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:01.452: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:01.549: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-6390db0c-8f11-4b57-8b22-f01835c60ca0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-cfabc395-c818-4295-87c0-1a9e38bba6a3 in namespace persistent-local-volumes-test-1324 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:11:01.553: INFO: Deleting PersistentVolumeClaim "pvc-wdbjf" Jun 18 00:11:01.557: INFO: Deleting PersistentVolume "local-pvmbmw8" STEP: Removing the test directory Jun 18 00:11:01.561: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6390db0c-8f11-4b57-8b22-f01835c60ca0] Namespace:persistent-local-volumes-test-1324 PodName:hostexec-node2-c8nv9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:01.561: INFO: >>> kubeConfig: /root/.kube/config 
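Once the claim is bound, the spec schedules a single-container "write-pod" that mounts it at /mnt/volume1 and then execs the logged `echo`/`cat` commands against the file. A hedged sketch of that pod shape is below; the image and names are illustrative and not the framework's own definitions.

```go
// Sketch of the "write-pod" used by these local-volume specs: one container mounting the
// bound claim at /mnt/volume1 so the test can exec shell read/write commands into it.
// Image, command and names are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "write-pod-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "write-pod",
				Image:   "busybox", // illustrative; the suite uses its own test image
				Command: []string{"/bin/sh", "-c", "sleep 3600"}, // keep the pod alive for exec
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "volume1",
					MountPath: "/mnt/volume1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "pvc-example", // the claim bound in the sketch above
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```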
[AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:01.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1324" for this suite. • [SLOW TEST:12.600 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":12,"skipped":432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:38.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:10:42.930: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-710860eb-966a-463f-b5df-ce97dd1cfe7c-backend && mount --bind /tmp/local-volume-test-710860eb-966a-463f-b5df-ce97dd1cfe7c-backend /tmp/local-volume-test-710860eb-966a-463f-b5df-ce97dd1cfe7c-backend && ln -s /tmp/local-volume-test-710860eb-966a-463f-b5df-ce97dd1cfe7c-backend /tmp/local-volume-test-710860eb-966a-463f-b5df-ce97dd1cfe7c] Namespace:persistent-local-volumes-test-5467 PodName:hostexec-node1-8f74h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:42.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:10:43.042: INFO: Creating a PV followed by a PVC Jun 18 00:10:43.049: INFO: Waiting for PV local-pv2ct84 to bind to PVC pvc-scs8m Jun 18 00:10:43.049: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-scs8m] to have phase Bound Jun 18 00:10:43.051: INFO: PersistentVolumeClaim pvc-scs8m found but phase is Pending instead of Bound. Jun 18 00:10:45.055: INFO: PersistentVolumeClaim pvc-scs8m found but phase is Pending instead of Bound. Jun 18 00:10:47.059: INFO: PersistentVolumeClaim pvc-scs8m found but phase is Pending instead of Bound. Jun 18 00:10:49.063: INFO: PersistentVolumeClaim pvc-scs8m found but phase is Pending instead of Bound. 
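The repeated "found but phase is Pending instead of Bound" lines here come from polling the claim until it reports Bound or a timeout expires. The following client-go sketch reproduces that loop under stated assumptions; it is not the framework's actual WaitFor helper, only an equivalent pattern using the namespace and claim name visible in this spec's log.

```go
// Hedged sketch of the polling behind "found but phase is Pending instead of Bound":
// re-read the PVC every couple of seconds until Status.Phase is Bound or the timeout hits.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPVCBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(cs, "persistent-local-volumes-test-5467", "pvc-scs8m"); err != nil {
		panic(err)
	}
}
```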
Jun 18 00:10:51.070: INFO: PersistentVolumeClaim pvc-scs8m found but phase is Pending instead of Bound. Jun 18 00:10:53.073: INFO: PersistentVolumeClaim pvc-scs8m found but phase is Pending instead of Bound. Jun 18 00:10:55.079: INFO: PersistentVolumeClaim pvc-scs8m found but phase is Pending instead of Bound. Jun 18 00:10:57.088: INFO: PersistentVolumeClaim pvc-scs8m found and phase=Bound (14.039157844s) Jun 18 00:10:57.088: INFO: Waiting up to 3m0s for PersistentVolume local-pv2ct84 to have phase Bound Jun 18 00:10:57.091: INFO: PersistentVolume local-pv2ct84 found and phase=Bound (2.199184ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:11:03.122: INFO: pod "pod-0da2a4e8-c0cc-426b-b88c-7b53bc032c44" created on Node "node1" STEP: Writing in pod1 Jun 18 00:11:03.122: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5467 PodName:pod-0da2a4e8-c0cc-426b-b88c-7b53bc032c44 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:03.122: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:03.295: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 18 00:11:03.295: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5467 PodName:pod-0da2a4e8-c0cc-426b-b88c-7b53bc032c44 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:03.295: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:03.499: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-0da2a4e8-c0cc-426b-b88c-7b53bc032c44 in namespace persistent-local-volumes-test-5467 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:11:03.505: INFO: Deleting PersistentVolumeClaim "pvc-scs8m" Jun 18 00:11:03.508: INFO: Deleting PersistentVolume "local-pv2ct84" STEP: Removing the test directory Jun 18 00:11:03.512: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-710860eb-966a-463f-b5df-ce97dd1cfe7c && umount /tmp/local-volume-test-710860eb-966a-463f-b5df-ce97dd1cfe7c-backend && rm -r /tmp/local-volume-test-710860eb-966a-463f-b5df-ce97dd1cfe7c-backend] Namespace:persistent-local-volumes-test-5467 PodName:hostexec-node1-8f74h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:03.512: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:03.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5467" for this suite. • [SLOW TEST:24.831 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":15,"skipped":577,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:29.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 STEP: Building a driver namespace object, basename csi-mock-volumes-9884 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:09:29.330: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-attacher Jun 18 00:09:29.333: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9884 Jun 18 00:09:29.333: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9884 Jun 18 00:09:29.336: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9884 Jun 18 00:09:29.339: INFO: creating *v1.Role: csi-mock-volumes-9884-9435/external-attacher-cfg-csi-mock-volumes-9884 Jun 18 00:09:29.341: INFO: creating *v1.RoleBinding: csi-mock-volumes-9884-9435/csi-attacher-role-cfg Jun 18 00:09:29.344: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-provisioner Jun 18 00:09:29.346: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9884 Jun 18 00:09:29.346: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9884 Jun 18 00:09:29.349: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9884 Jun 18 00:09:29.351: INFO: creating *v1.Role: csi-mock-volumes-9884-9435/external-provisioner-cfg-csi-mock-volumes-9884 Jun 18 00:09:29.354: INFO: creating *v1.RoleBinding: csi-mock-volumes-9884-9435/csi-provisioner-role-cfg Jun 18 00:09:29.357: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-resizer Jun 18 00:09:29.360: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9884 Jun 18 00:09:29.360: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9884 Jun 18 
00:09:29.362: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9884 Jun 18 00:09:29.366: INFO: creating *v1.Role: csi-mock-volumes-9884-9435/external-resizer-cfg-csi-mock-volumes-9884 Jun 18 00:09:29.368: INFO: creating *v1.RoleBinding: csi-mock-volumes-9884-9435/csi-resizer-role-cfg Jun 18 00:09:29.371: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-snapshotter Jun 18 00:09:29.374: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9884 Jun 18 00:09:29.374: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9884 Jun 18 00:09:29.377: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9884 Jun 18 00:09:29.380: INFO: creating *v1.Role: csi-mock-volumes-9884-9435/external-snapshotter-leaderelection-csi-mock-volumes-9884 Jun 18 00:09:29.383: INFO: creating *v1.RoleBinding: csi-mock-volumes-9884-9435/external-snapshotter-leaderelection Jun 18 00:09:29.385: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-mock Jun 18 00:09:29.388: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9884 Jun 18 00:09:29.391: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9884 Jun 18 00:09:29.394: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9884 Jun 18 00:09:29.397: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9884 Jun 18 00:09:29.399: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9884 Jun 18 00:09:29.402: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9884 Jun 18 00:09:29.404: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9884 Jun 18 00:09:29.409: INFO: creating *v1.StatefulSet: csi-mock-volumes-9884-9435/csi-mockplugin Jun 18 00:09:29.415: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9884 Jun 18 00:09:29.420: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9884" Jun 18 00:09:29.427: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9884 to register on node node1 STEP: Creating pod with fsGroup Jun 18 00:09:39.442: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:09:39.446: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cm59q] to have phase Bound Jun 18 00:09:39.451: INFO: PersistentVolumeClaim pvc-cm59q found but phase is Pending instead of Bound. 
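This fsGroupPolicy=File spec turns on two knobs: the mock CSIDriver declares fsGroupPolicy File (kubelet re-applies the pod's fsGroup to the volume on every mount), and the tester pod sets that fsGroup in its securityContext, which is why the later `ls -l` check shows the file group-owned by the pod's fsGroup. A hedged sketch follows, assuming the storage/v1 API of this v1.21 cluster; names and values are illustrative.

```go
// Sketch (not the framework's code) of the two pieces behind fsGroupPolicy=File:
// a CSIDriver that asks kubelet to always apply fsGroup, and a pod securityContext
// carrying that fsGroup. The group id mirrors the owner seen in this spec's ls -l output.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	policy := storagev1.FileFSGroupPolicy // "File": kubelet chowns/chmods the volume on mount
	driver := storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-example"}, // illustrative name
		Spec:       storagev1.CSIDriverSpec{FSGroupPolicy: &policy},
	}

	fsGroup := int64(16597) // group that should end up owning files on the volume
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-volume-tester-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:  "volume-tester",
				Image: "busybox", // illustrative
			}},
		},
	}
	fmt.Println(driver.Name, pod.Name)
}
```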
Jun 18 00:09:41.456: INFO: PersistentVolumeClaim pvc-cm59q found and phase=Bound (2.009192491s) Jun 18 00:09:45.476: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-9884] Namespace:csi-mock-volumes-9884 PodName:pvc-volume-tester-78v9l ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:45.476: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:45.602: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-9884/csi-mock-volumes-9884'; sync] Namespace:csi-mock-volumes-9884 PodName:pvc-volume-tester-78v9l ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:45.602: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:47.627: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-9884/csi-mock-volumes-9884] Namespace:csi-mock-volumes-9884 PodName:pvc-volume-tester-78v9l ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:47.627: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:09:47.711: INFO: pod csi-mock-volumes-9884/pvc-volume-tester-78v9l exec for cmd ls -l /mnt/test/csi-mock-volumes-9884/csi-mock-volumes-9884, stdout: -rw-r--r-- 1 root 16597 13 Jun 18 00:09 /mnt/test/csi-mock-volumes-9884/csi-mock-volumes-9884, stderr: Jun 18 00:09:47.711: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-9884] Namespace:csi-mock-volumes-9884 PodName:pvc-volume-tester-78v9l ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:09:47.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-78v9l Jun 18 00:09:47.799: INFO: Deleting pod "pvc-volume-tester-78v9l" in namespace "csi-mock-volumes-9884" Jun 18 00:09:47.804: INFO: Wait up to 5m0s for pod "pvc-volume-tester-78v9l" to be fully deleted STEP: Deleting claim pvc-cm59q Jun 18 00:10:29.818: INFO: Waiting up to 2m0s for PersistentVolume pvc-785d9e9d-d247-45db-ba87-977d611c2623 to get deleted Jun 18 00:10:29.820: INFO: PersistentVolume pvc-785d9e9d-d247-45db-ba87-977d611c2623 found and phase=Bound (1.941026ms) Jun 18 00:10:31.825: INFO: PersistentVolume pvc-785d9e9d-d247-45db-ba87-977d611c2623 was removed STEP: Deleting storageclass csi-mock-volumes-9884-sc6dh96 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9884 STEP: Waiting for namespaces [csi-mock-volumes-9884] to vanish STEP: uninstalling csi mock driver Jun 18 00:10:37.839: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-attacher Jun 18 00:10:37.844: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9884 Jun 18 00:10:37.848: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9884 Jun 18 00:10:37.851: INFO: deleting *v1.Role: csi-mock-volumes-9884-9435/external-attacher-cfg-csi-mock-volumes-9884 Jun 18 00:10:37.854: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9884-9435/csi-attacher-role-cfg Jun 18 00:10:37.858: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-provisioner Jun 18 00:10:37.861: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9884 Jun 18 00:10:37.864: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9884 Jun 18 00:10:37.868: INFO: deleting *v1.Role: 
csi-mock-volumes-9884-9435/external-provisioner-cfg-csi-mock-volumes-9884 Jun 18 00:10:37.882: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9884-9435/csi-provisioner-role-cfg Jun 18 00:10:37.886: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-resizer Jun 18 00:10:37.890: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9884 Jun 18 00:10:37.894: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9884 Jun 18 00:10:37.897: INFO: deleting *v1.Role: csi-mock-volumes-9884-9435/external-resizer-cfg-csi-mock-volumes-9884 Jun 18 00:10:37.900: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9884-9435/csi-resizer-role-cfg Jun 18 00:10:37.903: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-snapshotter Jun 18 00:10:37.906: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9884 Jun 18 00:10:37.910: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9884 Jun 18 00:10:37.912: INFO: deleting *v1.Role: csi-mock-volumes-9884-9435/external-snapshotter-leaderelection-csi-mock-volumes-9884 Jun 18 00:10:37.917: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9884-9435/external-snapshotter-leaderelection Jun 18 00:10:37.920: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9884-9435/csi-mock Jun 18 00:10:37.924: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9884 Jun 18 00:10:37.927: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9884 Jun 18 00:10:37.931: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9884 Jun 18 00:10:37.934: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9884 Jun 18 00:10:37.993: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9884 Jun 18 00:10:37.997: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9884 Jun 18 00:10:38.000: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9884 Jun 18 00:10:38.004: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9884-9435/csi-mockplugin Jun 18 00:10:38.009: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9884 STEP: deleting the driver namespace: csi-mock-volumes-9884-9435 STEP: Waiting for namespaces [csi-mock-volumes-9884-9435] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:06.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:96.760 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1558 should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":16,"skipped":399,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: 
Creating a kubernetes client Jun 18 00:11:03.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 STEP: Creating projection with secret that has name projected-secret-test-af98c8a6-0239-4391-ad68-49fb57d565f1 STEP: Creating a pod to test consume secrets Jun 18 00:11:03.839: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291" in namespace "projected-9494" to be "Succeeded or Failed" Jun 18 00:11:03.842: INFO: Pod "pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027137ms Jun 18 00:11:05.846: INFO: Pod "pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007862406s Jun 18 00:11:07.851: INFO: Pod "pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012883592s Jun 18 00:11:09.855: INFO: Pod "pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016305233s Jun 18 00:11:11.860: INFO: Pod "pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02184563s STEP: Saw pod success Jun 18 00:11:11.860: INFO: Pod "pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291" satisfied condition "Succeeded or Failed" Jun 18 00:11:11.864: INFO: Trying to get logs from node node1 pod pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291 container projected-secret-volume-test: STEP: delete the pod Jun 18 00:11:11.883: INFO: Waiting for pod pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291 to disappear Jun 18 00:11:11.885: INFO: Pod pod-projected-secrets-9ba37225-6e36-4d07-bb59-b40f147d1291 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:11.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9494" for this suite. STEP: Destroying namespace "secret-namespace-5755" for this suite. 
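The projected-secret spec mounts the secret through a `projected` volume source and also creates an identically named secret in a second namespace (destroyed above) to confirm the mount only ever resolves within the pod's own namespace. A minimal sketch of that volume source is below; it is illustrative, not the spec's exact manifest.

```go
// Hedged sketch of the projected-secret volume this spec mounts: the secret is consumed
// from the pod's own namespace via a projected source, so a same-named secret elsewhere
// has no effect. Names are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-example", // illustrative secret name
						},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}
```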
• [SLOW TEST:8.123 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":16,"skipped":607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:06.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-160a0526-9db1-4156-a1ef-51911b76d234" Jun 18 00:11:08.115: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-160a0526-9db1-4156-a1ef-51911b76d234 && dd if=/dev/zero of=/tmp/local-volume-test-160a0526-9db1-4156-a1ef-51911b76d234/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-160a0526-9db1-4156-a1ef-51911b76d234/file] Namespace:persistent-local-volumes-test-4370 PodName:hostexec-node2-lj7hw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:08.115: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:08.234: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-160a0526-9db1-4156-a1ef-51911b76d234/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4370 PodName:hostexec-node2-lj7hw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:08.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:11:08.325: INFO: Creating a PV followed by a PVC Jun 18 00:11:08.332: INFO: Waiting for PV local-pvqpjsh to bind to PVC pvc-9xbfv Jun 18 00:11:08.332: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9xbfv] to have phase Bound Jun 18 00:11:08.334: INFO: PersistentVolumeClaim pvc-9xbfv found but phase is Pending instead of Bound. Jun 18 00:11:10.339: INFO: PersistentVolumeClaim pvc-9xbfv found but phase is Pending instead of Bound. 
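The [Volume type: block] setup below backs the PV with a loop device (the dd writes 5120 blocks of 4096 bytes, i.e. 20MiB, which losetup then exposes) and publishes it with volumeMode Block, so a consuming pod would attach it via volumeDevices rather than a filesystem mount. The sketch below is a hedged illustration of that PV shape, not the framework's helper; device path and names are illustrative, and nodeAffinity is omitted here although a real local PV needs it, as in the dir example earlier.

```go
// Sketch of a Block-mode local PV like the one this spec builds on a loop device.
// volumeMode=Block means pods attach it as a raw device (volumeDevices), not a mount.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	blockMode := corev1.PersistentVolumeBlock

	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-block-example"},
		Spec: corev1.PersistentVolumeSpec{
			VolumeMode:  &blockMode,
			Capacity:    corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("20Mi")},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/dev/loop0"}, // loop device from losetup
			},
			// nodeAffinity omitted for brevity; a local PV must pin to the hosting node.
		},
	}

	// How a pod would consume it: a raw device path instead of a volumeMount.
	device := corev1.VolumeDevice{Name: "block-vol", DevicePath: "/dev/xvda"} // illustrative path
	fmt.Println(pv.Name, device.DevicePath)
}
```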
Jun 18 00:11:12.343: INFO: PersistentVolumeClaim pvc-9xbfv found and phase=Bound (4.010347112s) Jun 18 00:11:12.343: INFO: Waiting up to 3m0s for PersistentVolume local-pvqpjsh to have phase Bound Jun 18 00:11:12.345: INFO: PersistentVolume local-pvqpjsh found and phase=Bound (1.922232ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Jun 18 00:11:12.349: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:11:12.351: INFO: Deleting PersistentVolumeClaim "pvc-9xbfv" Jun 18 00:11:12.354: INFO: Deleting PersistentVolume "local-pvqpjsh" Jun 18 00:11:12.358: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-160a0526-9db1-4156-a1ef-51911b76d234/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4370 PodName:hostexec-node2-lj7hw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:12.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-160a0526-9db1-4156-a1ef-51911b76d234/file Jun 18 00:11:12.454: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4370 PodName:hostexec-node2-lj7hw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:12.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-160a0526-9db1-4156-a1ef-51911b76d234 Jun 18 00:11:12.552: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-160a0526-9db1-4156-a1ef-51911b76d234] Namespace:persistent-local-volumes-test-4370 PodName:hostexec-node2-lj7hw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:12.552: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:12.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4370" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [6.601 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:01.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Jun 18 00:11:07.778: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a58c1d2c-605c-45a5-90a1-ddbb74220131] Namespace:persistent-local-volumes-test-927 PodName:hostexec-node1-85vsh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:07.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:11:07.874: INFO: Creating a PV followed by a PVC Jun 18 00:11:07.882: INFO: Waiting for PV local-pv9h6vp to bind to PVC pvc-9hgbv Jun 18 00:11:07.882: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9hgbv] to have phase Bound Jun 18 00:11:07.884: INFO: PersistentVolumeClaim pvc-9hgbv found but phase is Pending instead of Bound. Jun 18 00:11:09.887: INFO: PersistentVolumeClaim pvc-9hgbv found but phase is Pending instead of Bound. Jun 18 00:11:11.889: INFO: PersistentVolumeClaim pvc-9hgbv found and phase=Bound (4.007677878s) Jun 18 00:11:11.889: INFO: Waiting up to 3m0s for PersistentVolume local-pv9h6vp to have phase Bound Jun 18 00:11:11.891: INFO: PersistentVolume local-pv9h6vp found and phase=Bound (2.01908ms) [It] should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 STEP: local-volume-type: dir Jun 18 00:11:11.907: INFO: Waiting up to 5m0s for pod "pod-bc8446be-b886-4296-bd31-6ff30be29a83" in namespace "persistent-local-volumes-test-927" to be "Unschedulable" Jun 18 00:11:11.909: INFO: Pod "pod-bc8446be-b886-4296-bd31-6ff30be29a83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.424654ms Jun 18 00:11:13.913: INFO: Pod "pod-bc8446be-b886-4296-bd31-6ff30be29a83": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00589252s Jun 18 00:11:13.913: INFO: Pod "pod-bc8446be-b886-4296-bd31-6ff30be29a83" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Jun 18 00:11:13.913: INFO: Deleting PersistentVolumeClaim "pvc-9hgbv" Jun 18 00:11:13.917: INFO: Deleting PersistentVolume "local-pv9h6vp" STEP: Removing the test directory Jun 18 00:11:13.922: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a58c1d2c-605c-45a5-90a1-ddbb74220131] Namespace:persistent-local-volumes-test-927 PodName:hostexec-node1-85vsh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:13.922: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:14.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-927" for this suite. • [SLOW TEST:12.305 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":13,"skipped":466,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:55.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 18 00:10:55.146: INFO: The status of Pod test-hostpath-type-v4mn8 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:10:57.149: INFO: The status of Pod test-hostpath-type-v4mn8 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:10:59.150: INFO: The status of Pod test-hostpath-type-v4mn8 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:11:01.154: INFO: The status of Pod test-hostpath-type-v4mn8 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:11:03.149: INFO: The status of Pod test-hostpath-type-v4mn8 is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:25.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-9165" for this suite. • [SLOW TEST:30.099 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset","total":-1,"completed":5,"skipped":116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:28.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Jun 18 00:10:30.556: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9b4d1ba2-5da7-4aa4-bc90-be4ab8d701a1] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:30.556: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:30.670: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1510ef72-5b6d-418a-966f-e77118935dc4] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:30.670: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:30.765: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c4f731dc-7cc7-4cf0-8f62-6ab6392fda49] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:30.765: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:30.861: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-47a9b254-b263-4cce-8759-b1f769ff979f] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 18 00:10:30.861: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:30.944: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e12e6ea4-03dc-4db7-9995-73df3e2835f5] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:30.944: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:31.027: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-50ae10d8-9097-42f8-9703-caed7f36cf44] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:31.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:10:31.107: INFO: Creating a PV followed by a PVC Jun 18 00:10:31.115: INFO: Creating a PV followed by a PVC Jun 18 00:10:31.121: INFO: Creating a PV followed by a PVC Jun 18 00:10:31.126: INFO: Creating a PV followed by a PVC Jun 18 00:10:31.132: INFO: Creating a PV followed by a PVC Jun 18 00:10:31.137: INFO: Creating a PV followed by a PVC Jun 18 00:10:41.185: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Jun 18 00:10:43.201: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e031a3e6-0b8c-4ef9-9922-611744c3485b] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:43.201: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:43.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-10e92bbb-4623-4107-96bb-92b6413a10eb] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:43.291: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:43.377: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-12b33123-8bc9-4c34-8406-10c3105b8b44] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:43.377: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:43.490: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f8441f40-6093-461f-a733-fcc2179c5efc] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:43.490: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:10:43.575: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3f5df947-9f35-4f60-822f-ddf1a7377d60] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:43.575: INFO: >>> 
kubeConfig: /root/.kube/config Jun 18 00:10:43.660: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ba0137f6-5e8d-44b2-899c-ce7e56c4097f] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:10:43.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:10:43.752: INFO: Creating a PV followed by a PVC Jun 18 00:10:43.759: INFO: Creating a PV followed by a PVC Jun 18 00:10:43.765: INFO: Creating a PV followed by a PVC Jun 18 00:10:43.772: INFO: Creating a PV followed by a PVC Jun 18 00:10:43.778: INFO: Creating a PV followed by a PVC Jun 18 00:10:43.784: INFO: Creating a PV followed by a PVC Jun 18 00:10:53.833: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 STEP: Creating a StatefulSet with pod affinity on nodes Jun 18 00:10:53.840: INFO: Found 0 stateful pods, waiting for 3 Jun 18 00:11:03.845: INFO: Found 2 stateful pods, waiting for 3 Jun 18 00:11:13.845: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Jun 18 00:11:13.845: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Jun 18 00:11:13.845: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 18 00:11:23.846: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Jun 18 00:11:23.846: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Jun 18 00:11:23.846: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Jun 18 00:11:23.850: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Jun 18 00:11:23.852: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (2.155653ms) Jun 18 00:11:23.852: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-0] to have phase Bound Jun 18 00:11:23.855: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-0 found and phase=Bound (2.405995ms) Jun 18 00:11:23.855: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Jun 18 00:11:23.856: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (1.8121ms) Jun 18 00:11:23.856: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-1] to have phase Bound Jun 18 00:11:23.859: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-1 found and phase=Bound (2.853577ms) Jun 18 00:11:23.859: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Jun 18 00:11:23.862: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (2.589507ms) Jun 18 00:11:23.862: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-2] to have phase Bound Jun 18 00:11:23.865: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-2 found and phase=Bound 
(2.513909ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Jun 18 00:11:23.865: INFO: Deleting PersistentVolumeClaim "pvc-vsb2z" Jun 18 00:11:23.869: INFO: Deleting PersistentVolume "local-pvzwlfv" STEP: Cleaning up PVC and PV Jun 18 00:11:23.873: INFO: Deleting PersistentVolumeClaim "pvc-zzcsw" Jun 18 00:11:23.876: INFO: Deleting PersistentVolume "local-pvq54qz" STEP: Cleaning up PVC and PV Jun 18 00:11:23.880: INFO: Deleting PersistentVolumeClaim "pvc-ddr4w" Jun 18 00:11:23.884: INFO: Deleting PersistentVolume "local-pv6ss98" STEP: Cleaning up PVC and PV Jun 18 00:11:23.888: INFO: Deleting PersistentVolumeClaim "pvc-48jpt" Jun 18 00:11:23.891: INFO: Deleting PersistentVolume "local-pvvjsr4" STEP: Cleaning up PVC and PV Jun 18 00:11:23.895: INFO: Deleting PersistentVolumeClaim "pvc-wp5cz" Jun 18 00:11:23.898: INFO: Deleting PersistentVolume "local-pvnxgzk" STEP: Cleaning up PVC and PV Jun 18 00:11:23.903: INFO: Deleting PersistentVolumeClaim "pvc-wmtjn" Jun 18 00:11:23.906: INFO: Deleting PersistentVolume "local-pvrfczp" STEP: Removing the test directory Jun 18 00:11:23.909: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9b4d1ba2-5da7-4aa4-bc90-be4ab8d701a1] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:23.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:24.012: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1510ef72-5b6d-418a-966f-e77118935dc4] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:24.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:24.550: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c4f731dc-7cc7-4cf0-8f62-6ab6392fda49] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:24.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:24.636: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-47a9b254-b263-4cce-8759-b1f769ff979f] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:24.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:24.722: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e12e6ea4-03dc-4db7-9995-73df3e2835f5] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:24.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:24.811: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-50ae10d8-9097-42f8-9703-caed7f36cf44] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node1-s5qrc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:24.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Jun 18 00:11:24.899: INFO: Deleting PersistentVolumeClaim "pvc-lnczm" Jun 18 00:11:24.903: INFO: Deleting PersistentVolume "local-pv45brd" STEP: Cleaning up PVC and PV Jun 18 00:11:24.907: INFO: Deleting PersistentVolumeClaim "pvc-cr9xx" Jun 18 00:11:24.911: INFO: Deleting PersistentVolume "local-pvbzxm4" STEP: Cleaning up PVC and PV Jun 18 00:11:24.915: INFO: Deleting PersistentVolumeClaim "pvc-gxshh" Jun 18 00:11:24.919: INFO: Deleting PersistentVolume "local-pv6zqrr" STEP: Cleaning up PVC and PV Jun 18 00:11:24.922: INFO: Deleting PersistentVolumeClaim "pvc-qndlv" Jun 18 00:11:24.926: INFO: Deleting PersistentVolume "local-pvn4l6t" STEP: Cleaning up PVC and PV Jun 18 00:11:24.929: INFO: Deleting PersistentVolumeClaim "pvc-bm54f" Jun 18 00:11:24.933: INFO: Deleting PersistentVolume "local-pvgz66d" STEP: Cleaning up PVC and PV Jun 18 00:11:24.938: INFO: Deleting PersistentVolumeClaim "pvc-nnjgz" Jun 18 00:11:24.942: INFO: Deleting PersistentVolume "local-pvs4qwl" STEP: Removing the test directory Jun 18 00:11:24.945: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e031a3e6-0b8c-4ef9-9922-611744c3485b] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:24.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:25.036: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-10e92bbb-4623-4107-96bb-92b6413a10eb] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:25.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:25.136: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-12b33123-8bc9-4c34-8406-10c3105b8b44] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:25.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:25.230: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f8441f40-6093-461f-a733-fcc2179c5efc] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:25.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:25.325: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3f5df947-9f35-4f60-822f-ddf1a7377d60] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} Jun 18 00:11:25.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:25.431: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ba0137f6-5e8d-44b2-899c-ce7e56c4097f] Namespace:persistent-local-volumes-test-5151 PodName:hostexec-node2-gpwsb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:25.431: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:25.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5151" for this suite. • [SLOW TEST:57.035 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod has affinity","total":-1,"completed":6,"skipped":158,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:01:19.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [It] should fail due to non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307 STEP: Creating local PVC and PV Jun 18 00:01:19.555: INFO: Creating a PV followed by a PVC Jun 18 00:01:19.563: INFO: Waiting for PV local-pvnx9g5 to bind to PVC pvc-9ww94 Jun 18 00:01:19.563: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9ww94] to have phase Bound Jun 18 00:01:19.566: INFO: PersistentVolumeClaim pvc-9ww94 found but phase is Pending instead of Bound. Jun 18 00:01:21.570: INFO: PersistentVolumeClaim pvc-9ww94 found but phase is Pending instead of Bound. Jun 18 00:01:23.574: INFO: PersistentVolumeClaim pvc-9ww94 found but phase is Pending instead of Bound. Jun 18 00:01:25.579: INFO: PersistentVolumeClaim pvc-9ww94 found but phase is Pending instead of Bound. 
Jun 18 00:01:27.584: INFO: PersistentVolumeClaim pvc-9ww94 found and phase=Bound (8.020238055s) Jun 18 00:01:27.584: INFO: Waiting up to 3m0s for PersistentVolume local-pvnx9g5 to have phase Bound Jun 18 00:01:27.587: INFO: PersistentVolume local-pvnx9g5 found and phase=Bound (3.040479ms) STEP: Creating a pod STEP: Cleaning up PVC and PV Jun 18 00:11:27.617: INFO: Deleting PersistentVolumeClaim "pvc-9ww94" Jun 18 00:11:27.622: INFO: Deleting PersistentVolume "local-pvnx9g5" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:27.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9551" for this suite. • [SLOW TEST:608.485 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Local volume that cannot be mounted [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304 should fail due to non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to non-existent path","total":-1,"completed":1,"skipped":43,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:12.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Jun 18 00:11:18.759: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-01d01a18-c26e-46b7-9806-5b42a953cca6] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:18.759: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:18.928: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6defc70f-b0b7-46af-9a7e-5b4af2cc2d3e] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:18.928: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:19.043: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-417a6ed2-63c3-47e2-8aa4-c13efb31007b] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:19.043: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:19.161: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a29e91d3-c3b9-41ef-bde1-63c14495303d] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:19.161: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:19.344: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a6300e45-d28e-4d32-92af-7eaabc74bcd6] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:19.344: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:19.459: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6de9824f-b418-4b9d-b42c-61281f8f2f44] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:19.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:11:19.601: INFO: Creating a PV followed by a PVC Jun 18 00:11:19.607: INFO: Creating a PV followed by a PVC Jun 18 00:11:19.613: INFO: Creating a PV followed by a PVC Jun 18 00:11:19.619: INFO: Creating a PV followed by a PVC Jun 18 00:11:19.624: INFO: Creating a PV followed by a PVC Jun 18 00:11:19.630: INFO: Creating a PV followed by a PVC Jun 18 00:11:29.672: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Jun 18 00:11:31.693: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-850ff957-c631-4d02-9835-cbb5ab8a8041] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:31.693: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:31.784: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3f4b8d1d-7429-4d2f-adb8-d580cd2af41a] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:31.784: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:31.889: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-55891940-a2d3-4672-a240-6697d0d47660] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:31.889: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:31.998: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-91ab255f-2d7a-4b2d-ba93-5e4873b760cb] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 
00:11:31.998: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:32.085: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ea1add3a-5fc6-4fa2-be98-7347bd11080d] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:32.085: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:32.173: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6c470b29-a425-4c97-802a-61805bda0707] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:32.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:11:32.256: INFO: Creating a PV followed by a PVC Jun 18 00:11:32.263: INFO: Creating a PV followed by a PVC Jun 18 00:11:32.270: INFO: Creating a PV followed by a PVC Jun 18 00:11:32.275: INFO: Creating a PV followed by a PVC Jun 18 00:11:32.281: INFO: Creating a PV followed by a PVC Jun 18 00:11:32.286: INFO: Creating a PV followed by a PVC Jun 18 00:11:42.332: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod management is parallel and pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Jun 18 00:11:42.332: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Jun 18 00:11:42.333: INFO: Deleting PersistentVolumeClaim "pvc-jwmvg" Jun 18 00:11:42.338: INFO: Deleting PersistentVolume "local-pvhb24z" STEP: Cleaning up PVC and PV Jun 18 00:11:42.343: INFO: Deleting PersistentVolumeClaim "pvc-njjx6" Jun 18 00:11:42.346: INFO: Deleting PersistentVolume "local-pvzdq7n" STEP: Cleaning up PVC and PV Jun 18 00:11:42.350: INFO: Deleting PersistentVolumeClaim "pvc-nvtds" Jun 18 00:11:42.354: INFO: Deleting PersistentVolume "local-pv54fwz" STEP: Cleaning up PVC and PV Jun 18 00:11:42.358: INFO: Deleting PersistentVolumeClaim "pvc-xktfk" Jun 18 00:11:42.362: INFO: Deleting PersistentVolume "local-pvt6xk9" STEP: Cleaning up PVC and PV Jun 18 00:11:42.365: INFO: Deleting PersistentVolumeClaim "pvc-xjpwm" Jun 18 00:11:42.369: INFO: Deleting PersistentVolume "local-pv9v22g" STEP: Cleaning up PVC and PV Jun 18 00:11:42.372: INFO: Deleting PersistentVolumeClaim "pvc-j5vzb" Jun 18 00:11:42.375: INFO: Deleting PersistentVolume "local-pvpx2lg" STEP: Removing the test directory Jun 18 00:11:42.379: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-01d01a18-c26e-46b7-9806-5b42a953cca6] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:42.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:42.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6defc70f-b0b7-46af-9a7e-5b4af2cc2d3e] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:42.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:42.572: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-417a6ed2-63c3-47e2-8aa4-c13efb31007b] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:42.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:42.653: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a29e91d3-c3b9-41ef-bde1-63c14495303d] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:42.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:42.764: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a6300e45-d28e-4d32-92af-7eaabc74bcd6] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:42.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:42.863: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6de9824f-b418-4b9d-b42c-61281f8f2f44] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node1-vtx5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:42.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Jun 18 00:11:42.958: INFO: Deleting PersistentVolumeClaim "pvc-gl8cs" Jun 18 00:11:42.963: INFO: Deleting PersistentVolume "local-pvltps7" STEP: Cleaning up PVC and PV Jun 18 00:11:42.967: INFO: Deleting PersistentVolumeClaim "pvc-bsvr2" Jun 18 00:11:42.970: INFO: Deleting PersistentVolume "local-pvtmhng" STEP: Cleaning up PVC and PV Jun 18 00:11:42.974: INFO: Deleting PersistentVolumeClaim "pvc-fxx94" Jun 18 00:11:42.982: INFO: Deleting PersistentVolume "local-pvhttjk" STEP: Cleaning up PVC and PV Jun 18 00:11:42.985: INFO: Deleting PersistentVolumeClaim "pvc-m2nwv" Jun 18 00:11:42.989: INFO: Deleting PersistentVolume "local-pv2756z" STEP: Cleaning up PVC and PV Jun 18 00:11:42.992: INFO: Deleting PersistentVolumeClaim "pvc-8l6hl" Jun 18 00:11:42.995: INFO: Deleting PersistentVolume "local-pvg7rxj" STEP: Cleaning up PVC and PV Jun 18 00:11:42.999: INFO: Deleting PersistentVolumeClaim "pvc-drxb4" Jun 18 00:11:43.002: INFO: Deleting PersistentVolume "local-pv9zdz4" STEP: Removing the test directory Jun 18 00:11:43.006: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-850ff957-c631-4d02-9835-cbb5ab8a8041] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:43.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:43.110: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3f4b8d1d-7429-4d2f-adb8-d580cd2af41a] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:43.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:43.247: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-55891940-a2d3-4672-a240-6697d0d47660] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:43.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:43.704: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-91ab255f-2d7a-4b2d-ba93-5e4873b760cb] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:43.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:43.811: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ea1add3a-5fc6-4fa2-be98-7347bd11080d] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:43.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:43.889: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6c470b29-a425-4c97-802a-61805bda0707] Namespace:persistent-local-volumes-test-675 PodName:hostexec-node2-vp7jh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:43.889: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:43.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-675" for this suite. 
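The StatefulSet specs above prepare their storage by hand: a directory is created on each node through a hostexec pod, wrapped in a local PersistentVolume pinned to that node, and paired with a claim that is expected to stay Pending ("PVCs were not bound within 10s (that's good)") because binding is deferred until a consuming pod is scheduled; the StatefulSet's volumeClaimTemplates (the vol1/vol2 claims above) then pick those PVs up. A minimal sketch of that delayed-binding pattern follows; apart from the node name node1, every name, path and size is illustrative rather than taken from the test.

# Sketch of the delayed-binding local-volume pattern used above.
# Only "node1" comes from the log; class, PV, PVC names, path and size are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # static local PVs, no dynamic provisioning
volumeBindingMode: WaitForFirstConsumer     # bind only once a pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1          # directory must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
EOF
# With WaitForFirstConsumer the claim stays Pending until a pod that uses it is
# scheduled, which is why "not bound within 10s" is the expected outcome above.
kubectl get pvc example-local-pvc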
S [SKIPPING] [31.279 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod management is parallel and pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:44.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:11:44.173: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:44.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1582" for this suite. 
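Both "[Serial] Volume metrics" specs in this run are skipped during BeforeEach because the suite was started with the local provider while the tests are gated to gce, gke and aws. To exercise them, the compiled e2e binary would have to be pointed at one of those providers; the sketch below is only indicative, and the exact flag set should be checked against the e2e.test build in use (zone, kubeconfig path and focus pattern are placeholders).

# Indicative invocation only; provider-specific flags vary and all values are placeholders.
./e2e.test \
  --provider=gce \
  --gce-zone=us-central1-b \
  --kubeconfig="$HOME/.kube/config" \
  --ginkgo.focus='\[Serial\] Volume metrics'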
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:25.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:11:29.641: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-bbdda7e0-2fd5-4f90-90dd-965afc364bc1-backend && ln -s /tmp/local-volume-test-bbdda7e0-2fd5-4f90-90dd-965afc364bc1-backend /tmp/local-volume-test-bbdda7e0-2fd5-4f90-90dd-965afc364bc1] Namespace:persistent-local-volumes-test-2793 PodName:hostexec-node2-2ng7l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:29.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:11:29.731: INFO: Creating a PV followed by a PVC Jun 18 00:11:29.738: INFO: Waiting for PV local-pv5xn25 to bind to PVC pvc-v5jmt Jun 18 00:11:29.738: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-v5jmt] to have phase Bound Jun 18 00:11:29.743: INFO: PersistentVolumeClaim pvc-v5jmt found but phase is Pending instead of Bound. Jun 18 00:11:31.748: INFO: PersistentVolumeClaim pvc-v5jmt found but phase is Pending instead of Bound. Jun 18 00:11:33.752: INFO: PersistentVolumeClaim pvc-v5jmt found but phase is Pending instead of Bound. Jun 18 00:11:35.756: INFO: PersistentVolumeClaim pvc-v5jmt found but phase is Pending instead of Bound. Jun 18 00:11:37.759: INFO: PersistentVolumeClaim pvc-v5jmt found but phase is Pending instead of Bound. Jun 18 00:11:39.764: INFO: PersistentVolumeClaim pvc-v5jmt found but phase is Pending instead of Bound. 
Jun 18 00:11:41.768: INFO: PersistentVolumeClaim pvc-v5jmt found and phase=Bound (12.029831884s) Jun 18 00:11:41.768: INFO: Waiting up to 3m0s for PersistentVolume local-pv5xn25 to have phase Bound Jun 18 00:11:41.770: INFO: PersistentVolume local-pv5xn25 found and phase=Bound (2.382457ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 18 00:11:45.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2793 exec pod-1a07be3d-d045-4a39-9f36-5963ec5bd7d9 --namespace=persistent-local-volumes-test-2793 -- stat -c %g /mnt/volume1' Jun 18 00:11:46.129: INFO: stderr: "" Jun 18 00:11:46.129: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-1a07be3d-d045-4a39-9f36-5963ec5bd7d9 in namespace persistent-local-volumes-test-2793 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:11:46.135: INFO: Deleting PersistentVolumeClaim "pvc-v5jmt" Jun 18 00:11:46.139: INFO: Deleting PersistentVolume "local-pv5xn25" STEP: Removing the test directory Jun 18 00:11:46.143: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bbdda7e0-2fd5-4f90-90dd-965afc364bc1 && rm -r /tmp/local-volume-test-bbdda7e0-2fd5-4f90-90dd-965afc364bc1-backend] Namespace:persistent-local-volumes-test-2793 PodName:hostexec-node2-2ng7l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:46.144: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:46.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2793" for this suite. 
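The dir-link spec above backs its local volume with a real directory plus a symlink (the mkdir ... && ln -s ... command in the log) and then verifies group ownership by running stat -c %g inside the pod, expecting the pod's fsGroup of 1234. The same check can be reproduced with an ordinary pod; the sketch assumes the example-local-pvc claim from the earlier sketch, and the pod name and image are illustrative.

# Sketch: mount a local-volume claim with fsGroup set and confirm the gid inside the pod.
# The 1234 gid matches the value checked in the log; pod name, image and claim are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-check
spec:
  securityContext:
    fsGroup: 1234                  # kubelet applies this gid to supported volumes
  containers:
  - name: shell
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume1
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: example-local-pvc
EOF
kubectl wait --for=condition=Ready pod/fsgroup-check --timeout=2m
kubectl exec fsgroup-check -- stat -c %g /mnt/volume1   # expect 1234, as in the log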
• [SLOW TEST:20.657 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":7,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:27.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8" Jun 18 00:11:33.699: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8" "/tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8"] Namespace:persistent-local-volumes-test-105 PodName:hostexec-node1-bxkr8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:33.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:11:33.854: INFO: Creating a PV followed by a PVC Jun 18 00:11:33.861: INFO: Waiting for PV local-pvbprgw to bind to PVC pvc-whdqn Jun 18 00:11:33.861: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-whdqn] to have phase Bound Jun 18 00:11:33.864: INFO: PersistentVolumeClaim pvc-whdqn found but phase is Pending instead of Bound. 
Jun 18 00:11:35.867: INFO: PersistentVolumeClaim pvc-whdqn found and phase=Bound (2.006268853s) Jun 18 00:11:35.867: INFO: Waiting up to 3m0s for PersistentVolume local-pvbprgw to have phase Bound Jun 18 00:11:35.870: INFO: PersistentVolume local-pvbprgw found and phase=Bound (2.217705ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 18 00:11:43.898: INFO: pod "pod-076f0f53-08fe-4f29-9848-3d80a05f6b9a" created on Node "node1" STEP: Writing in pod1 Jun 18 00:11:43.898: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-105 PodName:pod-076f0f53-08fe-4f29-9848-3d80a05f6b9a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:43.898: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:43.985: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:11:43.985: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-105 PodName:pod-076f0f53-08fe-4f29-9848-3d80a05f6b9a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:43.985: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:45.459: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 18 00:11:49.481: INFO: pod "pod-ab57aa75-1f7d-4ce7-9c31-e534b4cc1656" created on Node "node1" Jun 18 00:11:49.481: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-105 PodName:pod-ab57aa75-1f7d-4ce7-9c31-e534b4cc1656 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:49.481: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:49.578: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 18 00:11:49.578: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-105 PodName:pod-ab57aa75-1f7d-4ce7-9c31-e534b4cc1656 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:49.578: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:49.700: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 18 00:11:49.700: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-105 PodName:pod-076f0f53-08fe-4f29-9848-3d80a05f6b9a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:11:49.700: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:49.796: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-076f0f53-08fe-4f29-9848-3d80a05f6b9a in namespace persistent-local-volumes-test-105 STEP: Deleting pod2 STEP: Deleting pod pod-ab57aa75-1f7d-4ce7-9c31-e534b4cc1656 in namespace persistent-local-volumes-test-105 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:11:49.805: INFO: Deleting PersistentVolumeClaim "pvc-whdqn" Jun 18 00:11:49.808: INFO: Deleting PersistentVolume "local-pvbprgw" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8" Jun 18 00:11:49.812: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8"] Namespace:persistent-local-volumes-test-105 PodName:hostexec-node1-bxkr8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:49.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:11:49.915: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fc7bed94-c761-44ac-9b0c-74476b54dca8] Namespace:persistent-local-volumes-test-105 PodName:hostexec-node1-bxkr8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:11:49.915: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:50.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-105" for this suite. 
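For the tmpfs volume type above, the node-side preparation is just a small tmpfs mount at the path the local PV will point to, and teardown is the reverse. The log drives this through a hostexec pod with nsenter, but on the node itself it reduces to roughly the following; the path is a placeholder and the 10m size simply mirrors the log.

# Node-local sketch of the setup/teardown behind the "[Volume type: tmpfs]" specs.
# Run directly on the node; $VOL is a placeholder path.
VOL=/mnt/disks/tmpfs-vol
mkdir -p "$VOL"
mount -t tmpfs -o size=10m tmpfs "$VOL"   # RAM-backed directory for the local PV
# ... create the local PV/PVC pointing at $VOL and run the pods ...
umount "$VOL"
rm -r "$VOL"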
• [SLOW TEST:22.360 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":49,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:09:33.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-638 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:09:33.746: INFO: creating *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-attacher Jun 18 00:09:33.749: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-638 Jun 18 00:09:33.749: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-638 Jun 18 00:09:33.752: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-638 Jun 18 00:09:33.755: INFO: creating *v1.Role: csi-mock-volumes-638-9733/external-attacher-cfg-csi-mock-volumes-638 Jun 18 00:09:33.758: INFO: creating *v1.RoleBinding: csi-mock-volumes-638-9733/csi-attacher-role-cfg Jun 18 00:09:33.761: INFO: creating *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-provisioner Jun 18 00:09:33.767: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-638 Jun 18 00:09:33.767: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-638 Jun 18 00:09:33.770: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-638 Jun 18 00:09:33.773: INFO: creating *v1.Role: csi-mock-volumes-638-9733/external-provisioner-cfg-csi-mock-volumes-638 Jun 18 00:09:33.777: INFO: creating *v1.RoleBinding: csi-mock-volumes-638-9733/csi-provisioner-role-cfg Jun 18 00:09:33.780: INFO: creating *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-resizer Jun 18 00:09:33.782: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-638 Jun 18 00:09:33.782: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-638 Jun 18 00:09:33.785: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-638 Jun 18 00:09:33.788: INFO: creating *v1.Role: csi-mock-volumes-638-9733/external-resizer-cfg-csi-mock-volumes-638 Jun 18 00:09:33.791: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-638-9733/csi-resizer-role-cfg Jun 18 00:09:33.793: INFO: creating *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-snapshotter Jun 18 00:09:33.797: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-638 Jun 18 00:09:33.797: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-638 Jun 18 00:09:33.800: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-638 Jun 18 00:09:33.803: INFO: creating *v1.Role: csi-mock-volumes-638-9733/external-snapshotter-leaderelection-csi-mock-volumes-638 Jun 18 00:09:33.806: INFO: creating *v1.RoleBinding: csi-mock-volumes-638-9733/external-snapshotter-leaderelection Jun 18 00:09:33.808: INFO: creating *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-mock Jun 18 00:09:33.811: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-638 Jun 18 00:09:33.813: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-638 Jun 18 00:09:33.817: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-638 Jun 18 00:09:33.819: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-638 Jun 18 00:09:33.822: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-638 Jun 18 00:09:33.826: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-638 Jun 18 00:09:33.828: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-638 Jun 18 00:09:33.831: INFO: creating *v1.StatefulSet: csi-mock-volumes-638-9733/csi-mockplugin Jun 18 00:09:33.835: INFO: creating *v1.StatefulSet: csi-mock-volumes-638-9733/csi-mockplugin-attacher Jun 18 00:09:33.839: INFO: creating *v1.StatefulSet: csi-mock-volumes-638-9733/csi-mockplugin-resizer Jun 18 00:09:33.842: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-638 to register on node node2 STEP: Creating pod Jun 18 00:09:50.112: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:09:50.117: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-hj9mq] to have phase Bound Jun 18 00:09:50.119: INFO: PersistentVolumeClaim pvc-hj9mq found but phase is Pending instead of Bound. 
Jun 18 00:09:52.125: INFO: PersistentVolumeClaim pvc-hj9mq found and phase=Bound (2.008223173s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-tnjsb Jun 18 00:11:26.171: INFO: Deleting pod "pvc-volume-tester-tnjsb" in namespace "csi-mock-volumes-638" Jun 18 00:11:26.177: INFO: Wait up to 5m0s for pod "pvc-volume-tester-tnjsb" to be fully deleted STEP: Deleting claim pvc-hj9mq Jun 18 00:11:30.190: INFO: Waiting up to 2m0s for PersistentVolume pvc-15da9e2b-2aac-44a1-bde9-c732477bc5c2 to get deleted Jun 18 00:11:30.192: INFO: PersistentVolume pvc-15da9e2b-2aac-44a1-bde9-c732477bc5c2 found and phase=Bound (2.089212ms) Jun 18 00:11:32.195: INFO: PersistentVolume pvc-15da9e2b-2aac-44a1-bde9-c732477bc5c2 found and phase=Released (2.005627105s) Jun 18 00:11:34.199: INFO: PersistentVolume pvc-15da9e2b-2aac-44a1-bde9-c732477bc5c2 was removed STEP: Deleting storageclass csi-mock-volumes-638-scvv4lf STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-638 STEP: Waiting for namespaces [csi-mock-volumes-638] to vanish STEP: uninstalling csi mock driver Jun 18 00:11:40.215: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-attacher Jun 18 00:11:40.220: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-638 Jun 18 00:11:40.224: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-638 Jun 18 00:11:40.227: INFO: deleting *v1.Role: csi-mock-volumes-638-9733/external-attacher-cfg-csi-mock-volumes-638 Jun 18 00:11:40.231: INFO: deleting *v1.RoleBinding: csi-mock-volumes-638-9733/csi-attacher-role-cfg Jun 18 00:11:40.234: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-provisioner Jun 18 00:11:40.239: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-638 Jun 18 00:11:40.242: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-638 Jun 18 00:11:40.250: INFO: deleting *v1.Role: csi-mock-volumes-638-9733/external-provisioner-cfg-csi-mock-volumes-638 Jun 18 00:11:40.258: INFO: deleting *v1.RoleBinding: csi-mock-volumes-638-9733/csi-provisioner-role-cfg Jun 18 00:11:40.265: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-resizer Jun 18 00:11:40.271: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-638 Jun 18 00:11:40.274: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-638 Jun 18 00:11:40.277: INFO: deleting *v1.Role: csi-mock-volumes-638-9733/external-resizer-cfg-csi-mock-volumes-638 Jun 18 00:11:40.282: INFO: deleting *v1.RoleBinding: csi-mock-volumes-638-9733/csi-resizer-role-cfg Jun 18 00:11:40.285: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-snapshotter Jun 18 00:11:40.289: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-638 Jun 18 00:11:40.293: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-638 Jun 18 00:11:40.296: INFO: deleting *v1.Role: csi-mock-volumes-638-9733/external-snapshotter-leaderelection-csi-mock-volumes-638 Jun 18 00:11:40.300: INFO: deleting *v1.RoleBinding: csi-mock-volumes-638-9733/external-snapshotter-leaderelection Jun 18 00:11:40.303: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-638-9733/csi-mock Jun 18 00:11:40.308: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-638 Jun 18 00:11:40.311: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-provisioner-role-csi-mock-volumes-638 Jun 18 00:11:40.314: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-638 Jun 18 00:11:40.318: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-638 Jun 18 00:11:40.321: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-638 Jun 18 00:11:40.324: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-638 Jun 18 00:11:40.327: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-638 Jun 18 00:11:40.330: INFO: deleting *v1.StatefulSet: csi-mock-volumes-638-9733/csi-mockplugin Jun 18 00:11:40.337: INFO: deleting *v1.StatefulSet: csi-mock-volumes-638-9733/csi-mockplugin-attacher Jun 18 00:11:40.341: INFO: deleting *v1.StatefulSet: csi-mock-volumes-638-9733/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-638-9733 STEP: Waiting for namespaces [csi-mock-volumes-638-9733] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:52.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:138.672 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":12,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:52.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:11:52.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3087" for this suite. 
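The "memory backed volumes of specified size" spec exercises an emptyDir volume with medium Memory and an explicit sizeLimit, which the kubelet backs with a size-capped tmpfs. A minimal sketch of such a pod built with the core/v1 Go types (pod name, image, mount path, and the 64Mi limit are illustrative, not the spec's exact manifest):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryBackedEmptyDirPod returns a pod whose emptyDir volume is backed by
// memory (tmpfs) and capped at the given size.
func memoryBackedEmptyDirPod(sizeLimit resource.Quantity) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "memory-emptydir-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "df -h /data && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "mem",
					MountPath: "/data",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "mem",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium:    corev1.StorageMediumMemory,
						SizeLimit: &sizeLimit,
					},
				},
			}},
		},
	}
}

func main() {
	pod := memoryBackedEmptyDirPod(resource.MustParse("64Mi"))
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}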
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":13,"skipped":310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:07:38.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-5611 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:07:38.910: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-attacher Jun 18 00:07:38.913: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5611 Jun 18 00:07:38.913: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5611 Jun 18 00:07:38.915: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5611 Jun 18 00:07:38.918: INFO: creating *v1.Role: csi-mock-volumes-5611-6264/external-attacher-cfg-csi-mock-volumes-5611 Jun 18 00:07:38.921: INFO: creating *v1.RoleBinding: csi-mock-volumes-5611-6264/csi-attacher-role-cfg Jun 18 00:07:38.923: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-provisioner Jun 18 00:07:38.926: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5611 Jun 18 00:07:38.926: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5611 Jun 18 00:07:38.928: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5611 Jun 18 00:07:38.931: INFO: creating *v1.Role: csi-mock-volumes-5611-6264/external-provisioner-cfg-csi-mock-volumes-5611 Jun 18 00:07:38.934: INFO: creating *v1.RoleBinding: csi-mock-volumes-5611-6264/csi-provisioner-role-cfg Jun 18 00:07:38.937: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-resizer Jun 18 00:07:38.939: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5611 Jun 18 00:07:38.939: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5611 Jun 18 00:07:38.942: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5611 Jun 18 00:07:38.944: INFO: creating *v1.Role: csi-mock-volumes-5611-6264/external-resizer-cfg-csi-mock-volumes-5611 Jun 18 00:07:38.947: INFO: creating *v1.RoleBinding: csi-mock-volumes-5611-6264/csi-resizer-role-cfg Jun 18 00:07:38.950: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-snapshotter Jun 18 00:07:38.952: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5611 Jun 18 00:07:38.952: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5611 Jun 18 00:07:38.955: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5611 Jun 18 00:07:38.958: INFO: creating *v1.Role: csi-mock-volumes-5611-6264/external-snapshotter-leaderelection-csi-mock-volumes-5611 Jun 18 00:07:38.960: INFO: creating *v1.RoleBinding: csi-mock-volumes-5611-6264/external-snapshotter-leaderelection Jun 18 
00:07:38.963: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-mock Jun 18 00:07:38.966: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5611 Jun 18 00:07:38.968: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5611 Jun 18 00:07:38.971: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5611 Jun 18 00:07:38.974: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5611 Jun 18 00:07:38.977: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5611 Jun 18 00:07:38.979: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5611 Jun 18 00:07:38.982: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5611 Jun 18 00:07:38.985: INFO: creating *v1.StatefulSet: csi-mock-volumes-5611-6264/csi-mockplugin Jun 18 00:07:38.990: INFO: creating *v1.StatefulSet: csi-mock-volumes-5611-6264/csi-mockplugin-attacher Jun 18 00:07:38.993: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5611 to register on node node2 STEP: Creating pod Jun 18 00:07:55.267: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:07:55.273: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-ddnct] to have phase Bound Jun 18 00:07:55.275: INFO: PersistentVolumeClaim pvc-ddnct found but phase is Pending instead of Bound. Jun 18 00:07:57.278: INFO: PersistentVolumeClaim pvc-ddnct found and phase=Bound (2.005673001s) STEP: Expanding current pvc STEP: Deleting pod pvc-volume-tester-lqxg8 Jun 18 00:11:11.318: INFO: Deleting pod "pvc-volume-tester-lqxg8" in namespace "csi-mock-volumes-5611" Jun 18 00:11:11.324: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lqxg8" to be fully deleted STEP: Deleting claim pvc-ddnct Jun 18 00:11:19.335: INFO: Waiting up to 2m0s for PersistentVolume pvc-fd0baaa3-025a-4746-b8b8-d70021b97950 to get deleted Jun 18 00:11:19.338: INFO: PersistentVolume pvc-fd0baaa3-025a-4746-b8b8-d70021b97950 found and phase=Bound (2.47914ms) Jun 18 00:11:21.341: INFO: PersistentVolume pvc-fd0baaa3-025a-4746-b8b8-d70021b97950 found and phase=Released (2.005873948s) Jun 18 00:11:23.346: INFO: PersistentVolume pvc-fd0baaa3-025a-4746-b8b8-d70021b97950 found and phase=Released (4.010285352s) Jun 18 00:11:25.350: INFO: PersistentVolume pvc-fd0baaa3-025a-4746-b8b8-d70021b97950 found and phase=Released (6.014421474s) Jun 18 00:11:27.352: INFO: PersistentVolume pvc-fd0baaa3-025a-4746-b8b8-d70021b97950 was removed STEP: Deleting storageclass csi-mock-volumes-5611-sclqsl6 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5611 STEP: Waiting for namespaces [csi-mock-volumes-5611] to vanish STEP: uninstalling csi mock driver Jun 18 00:11:33.366: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-attacher Jun 18 00:11:33.369: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5611 Jun 18 00:11:33.373: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5611 Jun 18 00:11:33.378: INFO: deleting *v1.Role: csi-mock-volumes-5611-6264/external-attacher-cfg-csi-mock-volumes-5611 Jun 18 00:11:33.383: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5611-6264/csi-attacher-role-cfg Jun 18 00:11:33.386: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-provisioner Jun 18 00:11:33.389: INFO: deleting *v1.ClusterRole: 
external-provisioner-runner-csi-mock-volumes-5611 Jun 18 00:11:33.393: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5611 Jun 18 00:11:33.396: INFO: deleting *v1.Role: csi-mock-volumes-5611-6264/external-provisioner-cfg-csi-mock-volumes-5611 Jun 18 00:11:33.399: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5611-6264/csi-provisioner-role-cfg Jun 18 00:11:33.403: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-resizer Jun 18 00:11:33.407: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5611 Jun 18 00:11:33.410: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5611 Jun 18 00:11:33.413: INFO: deleting *v1.Role: csi-mock-volumes-5611-6264/external-resizer-cfg-csi-mock-volumes-5611 Jun 18 00:11:33.417: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5611-6264/csi-resizer-role-cfg Jun 18 00:11:33.421: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-snapshotter Jun 18 00:11:33.425: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5611 Jun 18 00:11:33.429: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5611 Jun 18 00:11:33.433: INFO: deleting *v1.Role: csi-mock-volumes-5611-6264/external-snapshotter-leaderelection-csi-mock-volumes-5611 Jun 18 00:11:33.436: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5611-6264/external-snapshotter-leaderelection Jun 18 00:11:33.440: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5611-6264/csi-mock Jun 18 00:11:33.443: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5611 Jun 18 00:11:33.447: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5611 Jun 18 00:11:33.450: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5611 Jun 18 00:11:33.454: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5611 Jun 18 00:11:33.458: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5611 Jun 18 00:11:33.461: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5611 Jun 18 00:11:33.464: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5611 Jun 18 00:11:33.468: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5611-6264/csi-mockplugin Jun 18 00:11:33.472: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5611-6264/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5611-6264 STEP: Waiting for namespaces [csi-mock-volumes-5611-6264] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:01.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:262.642 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, 
resizingOnSC=on","total":-1,"completed":12,"skipped":338,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:52.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 18 00:11:52.634: INFO: The status of Pod test-hostpath-type-fznr4 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:11:54.638: INFO: The status of Pod test-hostpath-type-fznr4 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:11:56.640: INFO: The status of Pod test-hostpath-type-fznr4 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:11:58.637: INFO: The status of Pod test-hostpath-type-fznr4 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:12:00.639: INFO: The status of Pod test-hostpath-type-fznr4 is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:06.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-1387" for this suite. 
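The HostPathType checks above turn on the hostPath volume's optional Type field: HostPathDirectoryOrCreate lets the kubelet create the directory 'adir', while a stricter type such as CharDevice makes the mount fail (with an error event) when the path is not of that kind. A minimal sketch of building such volumes with the core/v1 types (volume name and path are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a hostPath volume with an explicit HostPathType.
// The kubelet validates the type at mount time: requesting HostPathCharDev
// for a plain directory yields a mount error event, as in the spec above.
func hostPathVolume(name, path string, t corev1.HostPathType) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: path,
				Type: &t,
			},
		},
	}
}

func main() {
	// DirectoryOrCreate: the kubelet creates /mnt/adir if it does not exist.
	create := hostPathVolume("adir", "/mnt/adir", corev1.HostPathDirectoryOrCreate)
	// CharDevice: mounting the same directory with this type is expected to fail.
	strict := hostPathVolume("adir", "/mnt/adir", corev1.HostPathCharDev)

	for _, v := range []corev1.Volume{create, strict} {
		out, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(out))
	}
}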
• [SLOW TEST:14.105 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev","total":-1,"completed":14,"skipped":377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:06.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:12:06.797: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:06.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6886" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:44.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 STEP: creating a Gluster DP server Pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass STEP: creating a claim object with a suffix for gluster dynamic provisioner Jun 18 00:12:00.304: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: creating 
claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-2228 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-2228-glusterdptestkd5cs,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Jun 18 00:12:00.310: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jwpmh] to have phase Bound Jun 18 00:12:00.312: INFO: PersistentVolumeClaim pvc-jwpmh found but phase is Pending instead of Bound. Jun 18 00:12:02.317: INFO: PersistentVolumeClaim pvc-jwpmh found and phase=Bound (2.006924717s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-2228"/"pvc-jwpmh" STEP: deleting the claim's PV "pvc-c74d8b25-b89b-4c17-8705-1254cc0b58b7" Jun 18 00:12:02.327: INFO: Waiting up to 20m0s for PersistentVolume pvc-c74d8b25-b89b-4c17-8705-1254cc0b58b7 to get deleted Jun 18 00:12:02.329: INFO: PersistentVolume pvc-c74d8b25-b89b-4c17-8705-1254cc0b58b7 found and phase=Bound (2.116773ms) Jun 18 00:12:07.332: INFO: PersistentVolume pvc-c74d8b25-b89b-4c17-8705-1254cc0b58b7 was removed Jun 18 00:12:07.332: INFO: deleting claim "volume-provisioning-2228"/"pvc-jwpmh" Jun 18 00:12:07.335: INFO: deleting storage class volume-provisioning-2228-glusterdptestkd5cs [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:07.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-2228" for this suite. 
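The round trip above follows the usual dynamic-provisioning flow: a StorageClass names a provisioner, a claim references that class and binds once a PV has been provisioned, and deleting the claim removes the PV under the class's Delete reclaim policy. A minimal client-go sketch of the StorageClass side, assuming the in-tree Gluster provisioner name; the class name and the resturl parameter value are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// With the Delete reclaim policy, removing a bound claim also removes the
	// dynamically provisioned PV, which is what the log above waits for.
	reclaim := corev1.PersistentVolumeReclaimDelete
	sc := &storagev1.StorageClass{
		ObjectMeta:    metav1.ObjectMeta{Name: "gluster-dp-demo"},
		Provisioner:   "kubernetes.io/glusterfs",
		ReclaimPolicy: &reclaim,
		Parameters:    map[string]string{"resturl": "http://127.0.0.1:8081"},
	}
	if _, err := cs.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}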
• [SLOW TEST:23.088 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 GlusterDynamicProvisioner /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793 should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":17,"skipped":552,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:10:50.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 STEP: Building a driver namespace object, basename csi-mock-volumes-5647 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 18 00:10:50.448: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-attacher Jun 18 00:10:50.451: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5647 Jun 18 00:10:50.451: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5647 Jun 18 00:10:50.454: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5647 Jun 18 00:10:50.457: INFO: creating *v1.Role: csi-mock-volumes-5647-2146/external-attacher-cfg-csi-mock-volumes-5647 Jun 18 00:10:50.460: INFO: creating *v1.RoleBinding: csi-mock-volumes-5647-2146/csi-attacher-role-cfg Jun 18 00:10:50.463: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-provisioner Jun 18 00:10:50.466: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5647 Jun 18 00:10:50.466: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5647 Jun 18 00:10:50.485: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5647 Jun 18 00:10:50.488: INFO: creating *v1.Role: csi-mock-volumes-5647-2146/external-provisioner-cfg-csi-mock-volumes-5647 Jun 18 00:10:50.491: INFO: creating *v1.RoleBinding: csi-mock-volumes-5647-2146/csi-provisioner-role-cfg Jun 18 00:10:50.495: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-resizer Jun 18 00:10:50.498: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5647 Jun 18 00:10:50.498: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5647 Jun 18 00:10:50.501: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5647 Jun 18 00:10:50.504: INFO: creating *v1.Role: csi-mock-volumes-5647-2146/external-resizer-cfg-csi-mock-volumes-5647 Jun 18 00:10:50.507: INFO: creating *v1.RoleBinding: csi-mock-volumes-5647-2146/csi-resizer-role-cfg Jun 18 00:10:50.510: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-snapshotter Jun 18 00:10:50.513: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5647 Jun 18 
00:10:50.513: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5647 Jun 18 00:10:50.516: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5647 Jun 18 00:10:50.518: INFO: creating *v1.Role: csi-mock-volumes-5647-2146/external-snapshotter-leaderelection-csi-mock-volumes-5647 Jun 18 00:10:50.521: INFO: creating *v1.RoleBinding: csi-mock-volumes-5647-2146/external-snapshotter-leaderelection Jun 18 00:10:50.523: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-mock Jun 18 00:10:50.527: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5647 Jun 18 00:10:50.529: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5647 Jun 18 00:10:50.532: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5647 Jun 18 00:10:50.535: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5647 Jun 18 00:10:50.539: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5647 Jun 18 00:10:50.541: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5647 Jun 18 00:10:50.544: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5647 Jun 18 00:10:50.548: INFO: creating *v1.StatefulSet: csi-mock-volumes-5647-2146/csi-mockplugin Jun 18 00:10:50.553: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5647 Jun 18 00:10:50.555: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5647" Jun 18 00:10:50.557: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5647 to register on node node2 I0618 00:10:55.602182 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0618 00:10:55.604154 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5647","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:10:55.606366 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I0618 00:10:55.647486 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0618 00:10:55.714291 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5647","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:10:55.798940 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5647","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} STEP: Creating pod Jun 18 00:11:00.080: INFO: Warning: Making PVC: 
VolumeMode specified as invalid empty string, treating as nil I0618 00:11:00.108853 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I0618 00:11:02.224257 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I0618 00:11:03.423974 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:11:03.426: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:03.515294 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a","storage.kubernetes.io/csiProvisionerIdentity":"1655511055687-8081-csi-mock-csi-mock-volumes-5647"}},"Response":{},"Error":"","FullError":null} I0618 00:11:03.519881 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:11:03.521: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:03.605: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:03.685: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:03.770361 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a/globalmount","target_path":"/var/lib/kubelet/pods/11c07e3b-1873-42e8-a0a4-9aa927e95d97/volumes/kubernetes.io~csi/pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a","storage.kubernetes.io/csiProvisionerIdentity":"1655511055687-8081-csi-mock-csi-mock-volumes-5647"}},"Response":{},"Error":"","FullError":null} Jun 18 00:11:08.115: INFO: Deleting pod "pvc-volume-tester-hwtd8" in namespace "csi-mock-volumes-5647" Jun 18 00:11:08.120: INFO: 
Wait up to 5m0s for pod "pvc-volume-tester-hwtd8" to be fully deleted I0618 00:11:08.869755 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:08.873015 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/11c07e3b-1873-42e8-a0a4-9aa927e95d97/volumes/kubernetes.io~csi/pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} Jun 18 00:11:10.941: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:11.063121 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/11c07e3b-1873-42e8-a0a4-9aa927e95d97/volumes/kubernetes.io~csi/pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a/mount"},"Response":{},"Error":"","FullError":null} I0618 00:11:11.141739 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:11.143650 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a/globalmount"},"Response":{},"Error":"","FullError":null} I0618 00:11:20.140464 34 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Jun 18 00:11:21.130: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hsnfd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5647", SelfLink:"", UID:"f653847b-51d3-4bdc-9601-a97e9ea3974a", ResourceVersion:"98331", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107860, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003478258), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003478270)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00414e2f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00414e300), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:11:21.130: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pvc-hsnfd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5647", SelfLink:"", UID:"f653847b-51d3-4bdc-9601-a97e9ea3974a", ResourceVersion:"98335", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107860, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004118090), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041180a8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0041180c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041180d8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004aa80a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004aa80c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:11:21.130: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hsnfd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5647", SelfLink:"", UID:"f653847b-51d3-4bdc-9601-a97e9ea3974a", ResourceVersion:"98336", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107860, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5647", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821458), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002821470)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821488), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028214b8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028214d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028214e8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000996200), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc000996250), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:11:21.130: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hsnfd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5647", SelfLink:"", UID:"f653847b-51d3-4bdc-9601-a97e9ea3974a", ResourceVersion:"98340", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107860, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5647"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059c11a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059c11b8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059c11d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059c11e8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059c1200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059c1218)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0042f0c10), VolumeMode:(*v1.PersistentVolumeMode)(0xc0042f0c20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:11:21.131: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hsnfd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5647", SelfLink:"", UID:"f653847b-51d3-4bdc-9601-a97e9ea3974a", ResourceVersion:"98433", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107860, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5647", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059c1248), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059c1260)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059c1278), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059c1290)}, 
v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0059c12a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0059c12c0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0042f0c50), VolumeMode:(*v1.PersistentVolumeMode)(0xc0042f0c60), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:11:21.131: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hsnfd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5647", SelfLink:"", UID:"f653847b-51d3-4bdc-9601-a97e9ea3974a", ResourceVersion:"98439", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107860, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5647", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821728), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002821740)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821758), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002821770)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821788), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028217a0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a", StorageClassName:(*string)(0xc000996d00), VolumeMode:(*v1.PersistentVolumeMode)(0xc000996d20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:11:21.131: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hsnfd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5647", SelfLink:"", UID:"f653847b-51d3-4bdc-9601-a97e9ea3974a", ResourceVersion:"98440", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107860, loc:(*time.Location)(0x9e2e180)}}, 
DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5647", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028217d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028217e8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821800), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002821818)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821830), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002821848)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a", StorageClassName:(*string)(0xc000996da0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000996db0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:11:21.131: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hsnfd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5647", SelfLink:"", UID:"f653847b-51d3-4bdc-9601-a97e9ea3974a", ResourceVersion:"99002", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107860, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc002821878), DeletionGracePeriodSeconds:(*int64)(0xc0002fea98), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5647", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821890), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028218a8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028218c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028218d8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028218f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002821908)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a", StorageClassName:(*string)(0xc000996df0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000996e00), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:11:21.131: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-hsnfd", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5647", SelfLink:"", UID:"f653847b-51d3-4bdc-9601-a97e9ea3974a", ResourceVersion:"99003", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107860, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc002821938), DeletionGracePeriodSeconds:(*int64)(0xc0002fecd8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5647", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821950), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002821968)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002821980), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002821998)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0028219b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028219c8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-f653847b-51d3-4bdc-9601-a97e9ea3974a", StorageClassName:(*string)(0xc000996e60), VolumeMode:(*v1.PersistentVolumeMode)(0xc000996e70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-hwtd8 Jun 18 00:11:21.131: INFO: Deleting pod "pvc-volume-tester-hwtd8" in namespace "csi-mock-volumes-5647" STEP: Deleting claim pvc-hsnfd STEP: Deleting storageclass csi-mock-volumes-5647-scqmn49 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5647 
STEP: Waiting for namespaces [csi-mock-volumes-5647] to vanish STEP: uninstalling csi mock driver Jun 18 00:11:27.162: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-attacher Jun 18 00:11:27.167: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5647 Jun 18 00:11:27.171: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5647 Jun 18 00:11:27.174: INFO: deleting *v1.Role: csi-mock-volumes-5647-2146/external-attacher-cfg-csi-mock-volumes-5647 Jun 18 00:11:27.178: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5647-2146/csi-attacher-role-cfg Jun 18 00:11:27.181: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-provisioner Jun 18 00:11:27.184: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5647 Jun 18 00:11:27.188: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5647 Jun 18 00:11:27.191: INFO: deleting *v1.Role: csi-mock-volumes-5647-2146/external-provisioner-cfg-csi-mock-volumes-5647 Jun 18 00:11:27.195: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5647-2146/csi-provisioner-role-cfg Jun 18 00:11:27.198: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-resizer Jun 18 00:11:27.202: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5647 Jun 18 00:11:27.205: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5647 Jun 18 00:11:27.208: INFO: deleting *v1.Role: csi-mock-volumes-5647-2146/external-resizer-cfg-csi-mock-volumes-5647 Jun 18 00:11:27.212: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5647-2146/csi-resizer-role-cfg Jun 18 00:11:27.216: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-snapshotter Jun 18 00:11:27.221: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5647 Jun 18 00:11:27.224: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5647 Jun 18 00:11:27.227: INFO: deleting *v1.Role: csi-mock-volumes-5647-2146/external-snapshotter-leaderelection-csi-mock-volumes-5647 Jun 18 00:11:27.231: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5647-2146/external-snapshotter-leaderelection Jun 18 00:11:27.234: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5647-2146/csi-mock Jun 18 00:11:27.237: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5647 Jun 18 00:11:27.240: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5647 Jun 18 00:11:27.243: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5647 Jun 18 00:11:27.246: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5647 Jun 18 00:11:27.250: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5647 Jun 18 00:11:27.253: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5647 Jun 18 00:11:27.256: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5647 Jun 18 00:11:27.259: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5647-2146/csi-mockplugin Jun 18 00:11:27.263: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5647 STEP: deleting the driver namespace: csi-mock-volumes-5647-2146 STEP: Waiting for namespaces [csi-mock-volumes-5647-2146] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 
00:12:11.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:80.912 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022 exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":9,"skipped":405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:01.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-11229396-aec7-4848-827f-ce807b2b40b0" Jun 18 00:12:05.600: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-11229396-aec7-4848-827f-ce807b2b40b0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-11229396-aec7-4848-827f-ce807b2b40b0" "/tmp/local-volume-test-11229396-aec7-4848-827f-ce807b2b40b0"] Namespace:persistent-local-volumes-test-2757 PodName:hostexec-node2-zkhnt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:05.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:12:05.684: INFO: Creating a PV followed by a PVC Jun 18 00:12:05.692: INFO: Waiting for PV local-pvsz25k to bind to PVC pvc-drzdv Jun 18 00:12:05.692: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-drzdv] to have phase Bound Jun 18 00:12:05.694: INFO: PersistentVolumeClaim pvc-drzdv found but phase is Pending instead of Bound. Jun 18 00:12:07.699: INFO: PersistentVolumeClaim pvc-drzdv found but phase is Pending instead of Bound. Jun 18 00:12:09.704: INFO: PersistentVolumeClaim pvc-drzdv found but phase is Pending instead of Bound. 
Jun 18 00:12:11.708: INFO: PersistentVolumeClaim pvc-drzdv found and phase=Bound (6.016295182s) Jun 18 00:12:11.708: INFO: Waiting up to 3m0s for PersistentVolume local-pvsz25k to have phase Bound Jun 18 00:12:11.710: INFO: PersistentVolume local-pvsz25k found and phase=Bound (2.054794ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 18 00:12:11.714: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:12:11.716: INFO: Deleting PersistentVolumeClaim "pvc-drzdv" Jun 18 00:12:11.719: INFO: Deleting PersistentVolume "local-pvsz25k" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-11229396-aec7-4848-827f-ce807b2b40b0" Jun 18 00:12:11.723: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-11229396-aec7-4848-827f-ce807b2b40b0"] Namespace:persistent-local-volumes-test-2757 PodName:hostexec-node2-zkhnt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:11.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:12:11.869: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-11229396-aec7-4848-827f-ce807b2b40b0] Namespace:persistent-local-volumes-test-2757 PodName:hostexec-node2-zkhnt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:11.869: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:11.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2757" for this suite. 
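For reference, the tmpfs setup and teardown that the hostexec pod performs above (via nsenter into the host mount namespace) boils down to the following commands. This is a minimal sketch assuming root access on the node and a placeholder scratch path; the test itself generates a random /tmp/local-volume-test-* directory.

    # placeholder scratch directory on the node (the test uses a random name)
    VOL=/tmp/local-volume-test-example
    # create the mount point and mount a small tmpfs on it, as in the log above
    mkdir -p "$VOL"
    mount -t tmpfs -o size=10m "tmpfs-$VOL" "$VOL"
    # ... the directory is then exposed as a local PersistentVolume ...
    # teardown: unmount and remove the directory
    umount "$VOL"
    rm -r "$VOL"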
S [SKIPPING] [10.427 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:06.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-6c671a5f-072a-45ad-8b93-260606d7d4fd" Jun 18 00:12:10.953: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6c671a5f-072a-45ad-8b93-260606d7d4fd && dd if=/dev/zero of=/tmp/local-volume-test-6c671a5f-072a-45ad-8b93-260606d7d4fd/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-6c671a5f-072a-45ad-8b93-260606d7d4fd/file] Namespace:persistent-local-volumes-test-5035 PodName:hostexec-node2-jmbkj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:10.953: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:11.067: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6c671a5f-072a-45ad-8b93-260606d7d4fd/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5035 PodName:hostexec-node2-jmbkj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:11.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:12:11.150: INFO: Creating a PV followed by a PVC Jun 18 00:12:11.157: INFO: Waiting for PV local-pvxljt2 to bind to PVC pvc-fvg8w Jun 18 00:12:11.157: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fvg8w] to have phase Bound Jun 18 00:12:11.160: INFO: PersistentVolumeClaim pvc-fvg8w found but phase is Pending instead of Bound. 
Jun 18 00:12:13.163: INFO: PersistentVolumeClaim pvc-fvg8w found and phase=Bound (2.006075477s) Jun 18 00:12:13.163: INFO: Waiting up to 3m0s for PersistentVolume local-pvxljt2 to have phase Bound Jun 18 00:12:13.165: INFO: PersistentVolume local-pvxljt2 found and phase=Bound (2.126963ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 18 00:12:13.169: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:12:13.171: INFO: Deleting PersistentVolumeClaim "pvc-fvg8w" Jun 18 00:12:13.175: INFO: Deleting PersistentVolume "local-pvxljt2" Jun 18 00:12:13.179: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-6c671a5f-072a-45ad-8b93-260606d7d4fd/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5035 PodName:hostexec-node2-jmbkj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:13.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-6c671a5f-072a-45ad-8b93-260606d7d4fd/file Jun 18 00:12:13.308: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-5035 PodName:hostexec-node2-jmbkj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:13.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-6c671a5f-072a-45ad-8b93-260606d7d4fd Jun 18 00:12:13.418: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6c671a5f-072a-45ad-8b93-260606d7d4fd] Namespace:persistent-local-volumes-test-5035 PodName:hostexec-node2-jmbkj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:13.418: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:13.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5035" for this suite. 
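The blockfswithoutformat volume type above is backed by a loop device. The commands the test issues through the hostexec pod amount to roughly the following sketch, again assuming root on the node and a placeholder path; the lookup of the attached loop device mirrors the losetup | grep | awk pipeline in the log.

    DIR=/tmp/local-volume-test-example
    mkdir -p "$DIR"
    # create a 20 MiB backing file and attach it to the first free loop device
    dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
    losetup -f "$DIR/file"
    # find which loop device got the backing file (e.g. /dev/loop0)
    LOOP=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
    echo "$LOOP"
    # teardown: detach the loop device, then remove the backing file and directory
    losetup -d "$LOOP"
    rm -r "$DIR"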
S [SKIPPING] [6.614 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:07.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 18 00:12:07.423: INFO: The status of Pod test-hostpath-type-4qh2k is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:12:09.426: INFO: The status of Pod test-hostpath-type-4qh2k is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:12:11.427: INFO: The status of Pod test-hostpath-type-4qh2k is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 18 00:12:11.429: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-8619 PodName:test-hostpath-type-4qh2k ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:11.429: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:13.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-8619" for this suite. 
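The HostPathType test above first creates a character device inside the hostPath from within the test pod, then expects the second pod's mount to be rejected because the volume declares HostPathDirectory while the path is a character device. The device preparation itself is just mknod; a sketch of that step (device numbers copied from the log, the path only exists inside the test pod):

    # create a character device node with major 89, minor 1, as the test does
    mknod /mnt/test/achardev c 89 1
    # confirm the node is a character device ("character special file")
    stat -c %F /mnt/test/achardev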
• [SLOW TEST:6.163 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory","total":-1,"completed":18,"skipped":570,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:13.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:12:13.551: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:13.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3276" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:46.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:12:00.364: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir 
/tmp/local-volume-test-54de9100-406f-467b-b5fb-03ea5e0eeb3b && mount --bind /tmp/local-volume-test-54de9100-406f-467b-b5fb-03ea5e0eeb3b /tmp/local-volume-test-54de9100-406f-467b-b5fb-03ea5e0eeb3b] Namespace:persistent-local-volumes-test-6211 PodName:hostexec-node2-nztxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:00.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:12:00.449: INFO: Creating a PV followed by a PVC Jun 18 00:12:00.457: INFO: Waiting for PV local-pv9r65x to bind to PVC pvc-6td6c Jun 18 00:12:00.457: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6td6c] to have phase Bound Jun 18 00:12:00.459: INFO: PersistentVolumeClaim pvc-6td6c found but phase is Pending instead of Bound. Jun 18 00:12:02.463: INFO: PersistentVolumeClaim pvc-6td6c found but phase is Pending instead of Bound. Jun 18 00:12:04.467: INFO: PersistentVolumeClaim pvc-6td6c found but phase is Pending instead of Bound. Jun 18 00:12:06.473: INFO: PersistentVolumeClaim pvc-6td6c found but phase is Pending instead of Bound. Jun 18 00:12:08.478: INFO: PersistentVolumeClaim pvc-6td6c found but phase is Pending instead of Bound. Jun 18 00:12:10.482: INFO: PersistentVolumeClaim pvc-6td6c found but phase is Pending instead of Bound. Jun 18 00:12:12.485: INFO: PersistentVolumeClaim pvc-6td6c found and phase=Bound (12.02856623s) Jun 18 00:12:12.485: INFO: Waiting up to 3m0s for PersistentVolume local-pv9r65x to have phase Bound Jun 18 00:12:12.487: INFO: PersistentVolume local-pv9r65x found and phase=Bound (1.876265ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 18 00:12:16.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6211 exec pod-127560a4-75fe-4ced-9f55-cbd87def6900 --namespace=persistent-local-volumes-test-6211 -- stat -c %g /mnt/volume1' Jun 18 00:12:16.782: INFO: stderr: "" Jun 18 00:12:16.782: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-127560a4-75fe-4ced-9f55-cbd87def6900 in namespace persistent-local-volumes-test-6211 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:12:16.787: INFO: Deleting PersistentVolumeClaim "pvc-6td6c" Jun 18 00:12:16.790: INFO: Deleting PersistentVolume "local-pv9r65x" STEP: Removing the test directory Jun 18 00:12:16.793: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-54de9100-406f-467b-b5fb-03ea5e0eeb3b && rm -r /tmp/local-volume-test-54de9100-406f-467b-b5fb-03ea5e0eeb3b] Namespace:persistent-local-volumes-test-6211 PodName:hostexec-node2-nztxj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:16.794: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:16.946: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6211" for this suite. • [SLOW TEST:30.644 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":8,"skipped":205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:13.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 STEP: Creating configMap with name configmap-test-volume-3fc6178d-cf25-4ecb-a463-ab134fb39c10 STEP: Creating a pod to test consume configMaps Jun 18 00:12:13.673: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b8028c5-69ff-4e15-a8b9-8af31bbb8416" in namespace "configmap-3394" to be "Succeeded or Failed" Jun 18 00:12:13.678: INFO: Pod "pod-configmaps-4b8028c5-69ff-4e15-a8b9-8af31bbb8416": Phase="Pending", Reason="", readiness=false. Elapsed: 4.876337ms Jun 18 00:12:15.682: INFO: Pod "pod-configmaps-4b8028c5-69ff-4e15-a8b9-8af31bbb8416": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00867742s Jun 18 00:12:17.686: INFO: Pod "pod-configmaps-4b8028c5-69ff-4e15-a8b9-8af31bbb8416": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012540962s STEP: Saw pod success Jun 18 00:12:17.686: INFO: Pod "pod-configmaps-4b8028c5-69ff-4e15-a8b9-8af31bbb8416" satisfied condition "Succeeded or Failed" Jun 18 00:12:17.688: INFO: Trying to get logs from node node2 pod pod-configmaps-4b8028c5-69ff-4e15-a8b9-8af31bbb8416 container agnhost-container: STEP: delete the pod Jun 18 00:12:17.711: INFO: Waiting for pod pod-configmaps-4b8028c5-69ff-4e15-a8b9-8af31bbb8416 to disappear Jun 18 00:12:17.713: INFO: Pod pod-configmaps-4b8028c5-69ff-4e15-a8b9-8af31bbb8416 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:17.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3394" for this suite. 
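Both fsGroup cases above (the dir-bindmounted local volume and the ConfigMap volume) verify ownership the same way: exec into the pod and read the numeric group of the mounted path. A minimal way to reproduce that check by hand, with placeholder pod and namespace names standing in for whatever is being inspected:

    # print the GID owning the volume mount point; the local-volume test above saw "1234"
    kubectl --kubeconfig=/root/.kube/config \
      --namespace=persistent-local-volumes-test-example \
      exec pod-example -- stat -c %g /mnt/volume1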
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":15,"skipped":502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:25.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-3049 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:11:25.343: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-attacher Jun 18 00:11:25.347: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3049 Jun 18 00:11:25.347: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3049 Jun 18 00:11:25.350: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3049 Jun 18 00:11:25.353: INFO: creating *v1.Role: csi-mock-volumes-3049-8224/external-attacher-cfg-csi-mock-volumes-3049 Jun 18 00:11:25.356: INFO: creating *v1.RoleBinding: csi-mock-volumes-3049-8224/csi-attacher-role-cfg Jun 18 00:11:25.358: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-provisioner Jun 18 00:11:25.361: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3049 Jun 18 00:11:25.361: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3049 Jun 18 00:11:25.363: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3049 Jun 18 00:11:25.366: INFO: creating *v1.Role: csi-mock-volumes-3049-8224/external-provisioner-cfg-csi-mock-volumes-3049 Jun 18 00:11:25.369: INFO: creating *v1.RoleBinding: csi-mock-volumes-3049-8224/csi-provisioner-role-cfg Jun 18 00:11:25.371: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-resizer Jun 18 00:11:25.374: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3049 Jun 18 00:11:25.374: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3049 Jun 18 00:11:25.377: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3049 Jun 18 00:11:25.380: INFO: creating *v1.Role: csi-mock-volumes-3049-8224/external-resizer-cfg-csi-mock-volumes-3049 Jun 18 00:11:25.382: INFO: creating *v1.RoleBinding: csi-mock-volumes-3049-8224/csi-resizer-role-cfg Jun 18 00:11:25.385: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-snapshotter Jun 18 00:11:25.387: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3049 Jun 18 00:11:25.388: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3049 Jun 18 00:11:25.390: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3049 Jun 18 00:11:25.393: INFO: creating *v1.Role: csi-mock-volumes-3049-8224/external-snapshotter-leaderelection-csi-mock-volumes-3049 Jun 18 00:11:25.397: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-3049-8224/external-snapshotter-leaderelection Jun 18 00:11:25.400: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-mock Jun 18 00:11:25.403: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3049 Jun 18 00:11:25.406: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3049 Jun 18 00:11:25.409: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3049 Jun 18 00:11:25.413: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3049 Jun 18 00:11:25.415: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3049 Jun 18 00:11:25.418: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3049 Jun 18 00:11:25.421: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3049 Jun 18 00:11:25.424: INFO: creating *v1.StatefulSet: csi-mock-volumes-3049-8224/csi-mockplugin Jun 18 00:11:25.428: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3049 Jun 18 00:11:25.431: INFO: creating *v1.StatefulSet: csi-mock-volumes-3049-8224/csi-mockplugin-resizer Jun 18 00:11:25.434: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3049" Jun 18 00:11:25.437: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3049 to register on node node1 STEP: Creating pod Jun 18 00:11:41.707: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:11:41.712: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-b468z] to have phase Bound Jun 18 00:11:41.714: INFO: PersistentVolumeClaim pvc-b468z found but phase is Pending instead of Bound. Jun 18 00:11:43.718: INFO: PersistentVolumeClaim pvc-b468z found and phase=Bound (2.006258949s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Jun 18 00:11:51.754: INFO: Deleting pod "pvc-volume-tester-hbzpx" in namespace "csi-mock-volumes-3049" Jun 18 00:11:51.759: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hbzpx" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-hbzpx Jun 18 00:11:57.783: INFO: Deleting pod "pvc-volume-tester-hbzpx" in namespace "csi-mock-volumes-3049" STEP: Deleting pod pvc-volume-tester-8sv2g Jun 18 00:11:57.785: INFO: Deleting pod "pvc-volume-tester-8sv2g" in namespace "csi-mock-volumes-3049" Jun 18 00:11:57.790: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8sv2g" to be fully deleted STEP: Deleting claim pvc-b468z Jun 18 00:12:01.803: INFO: Waiting up to 2m0s for PersistentVolume pvc-efd8a8bc-8cea-4330-b174-9122e5d50046 to get deleted Jun 18 00:12:01.806: INFO: PersistentVolume pvc-efd8a8bc-8cea-4330-b174-9122e5d50046 found and phase=Bound (2.987645ms) Jun 18 00:12:03.811: INFO: PersistentVolume pvc-efd8a8bc-8cea-4330-b174-9122e5d50046 was removed STEP: Deleting storageclass csi-mock-volumes-3049-scxrpzs STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3049 STEP: Waiting for namespaces [csi-mock-volumes-3049] to vanish STEP: uninstalling csi mock driver Jun 18 00:12:09.823: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-attacher Jun 18 00:12:09.828: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3049 Jun 18 00:12:09.833: INFO: deleting 
*v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3049 Jun 18 00:12:09.836: INFO: deleting *v1.Role: csi-mock-volumes-3049-8224/external-attacher-cfg-csi-mock-volumes-3049 Jun 18 00:12:09.839: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3049-8224/csi-attacher-role-cfg Jun 18 00:12:09.843: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-provisioner Jun 18 00:12:09.846: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3049 Jun 18 00:12:09.849: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3049 Jun 18 00:12:09.853: INFO: deleting *v1.Role: csi-mock-volumes-3049-8224/external-provisioner-cfg-csi-mock-volumes-3049 Jun 18 00:12:09.856: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3049-8224/csi-provisioner-role-cfg Jun 18 00:12:09.860: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-resizer Jun 18 00:12:09.863: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3049 Jun 18 00:12:09.866: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3049 Jun 18 00:12:09.870: INFO: deleting *v1.Role: csi-mock-volumes-3049-8224/external-resizer-cfg-csi-mock-volumes-3049 Jun 18 00:12:09.873: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3049-8224/csi-resizer-role-cfg Jun 18 00:12:09.876: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-snapshotter Jun 18 00:12:09.880: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3049 Jun 18 00:12:09.883: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3049 Jun 18 00:12:09.887: INFO: deleting *v1.Role: csi-mock-volumes-3049-8224/external-snapshotter-leaderelection-csi-mock-volumes-3049 Jun 18 00:12:09.890: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3049-8224/external-snapshotter-leaderelection Jun 18 00:12:09.893: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3049-8224/csi-mock Jun 18 00:12:09.897: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3049 Jun 18 00:12:09.901: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3049 Jun 18 00:12:09.904: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3049 Jun 18 00:12:09.907: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3049 Jun 18 00:12:09.911: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3049 Jun 18 00:12:09.914: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3049 Jun 18 00:12:09.918: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3049 Jun 18 00:12:09.921: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3049-8224/csi-mockplugin Jun 18 00:12:09.925: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3049 Jun 18 00:12:09.929: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3049-8224/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-3049-8224 STEP: Waiting for namespaces [csi-mock-volumes-3049-8224] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:21.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:56.681 seconds] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":6,"skipped":142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:13.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:12:19.618: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a5ad2c2d-cad7-4474-848d-dcfc269e75ad] Namespace:persistent-local-volumes-test-5669 PodName:hostexec-node1-5s588 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:19.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:12:19.862: INFO: Creating a PV followed by a PVC Jun 18 00:12:19.869: INFO: Waiting for PV local-pvjzk2r to bind to PVC pvc-gm775 Jun 18 00:12:19.869: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-gm775] to have phase Bound Jun 18 00:12:19.871: INFO: PersistentVolumeClaim pvc-gm775 found but phase is Pending instead of Bound. 
Jun 18 00:12:21.876: INFO: PersistentVolumeClaim pvc-gm775 found and phase=Bound (2.006833794s) Jun 18 00:12:21.876: INFO: Waiting up to 3m0s for PersistentVolume local-pvjzk2r to have phase Bound Jun 18 00:12:21.878: INFO: PersistentVolume local-pvjzk2r found and phase=Bound (2.389737ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 18 00:12:21.883: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:12:21.885: INFO: Deleting PersistentVolumeClaim "pvc-gm775" Jun 18 00:12:21.888: INFO: Deleting PersistentVolume "local-pvjzk2r" STEP: Removing the test directory Jun 18 00:12:21.893: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a5ad2c2d-cad7-4474-848d-dcfc269e75ad] Namespace:persistent-local-volumes-test-5669 PodName:hostexec-node1-5s588 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:21.893: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:22.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5669" for this suite. 
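The repeated "found but phase is Pending instead of Bound" lines above are the framework's wait helper polling the claim until it binds. The same check can be done by hand with kubectl; a sketch with placeholder claim and namespace names:

    # poll the PVC phase until it reports Bound, roughly what the framework wait does
    while [ "$(kubectl -n persistent-local-volumes-test-example \
                get pvc pvc-example -o jsonpath='{.status.phase}')" != "Bound" ]; do
      sleep 2
    done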
S [SKIPPING] [8.452 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:11.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:12:15.435: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-5e0d3466-e8ad-41a8-b7ef-d25a08285a10-backend && ln -s /tmp/local-volume-test-5e0d3466-e8ad-41a8-b7ef-d25a08285a10-backend /tmp/local-volume-test-5e0d3466-e8ad-41a8-b7ef-d25a08285a10] Namespace:persistent-local-volumes-test-4251 PodName:hostexec-node1-zr9dt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:15.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:12:15.589: INFO: Creating a PV followed by a PVC Jun 18 00:12:15.596: INFO: Waiting for PV local-pv4r854 to bind to PVC pvc-9s6bt Jun 18 00:12:15.596: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9s6bt] to have phase Bound Jun 18 00:12:15.598: INFO: PersistentVolumeClaim pvc-9s6bt found but phase is Pending instead of Bound. 
Jun 18 00:12:17.601: INFO: PersistentVolumeClaim pvc-9s6bt found and phase=Bound (2.005461074s) Jun 18 00:12:17.601: INFO: Waiting up to 3m0s for PersistentVolume local-pv4r854 to have phase Bound Jun 18 00:12:17.603: INFO: PersistentVolume local-pv4r854 found and phase=Bound (1.987192ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:12:25.628: INFO: pod "pod-2091c7f4-22a2-4693-82e8-fe509d5912ca" created on Node "node1" STEP: Writing in pod1 Jun 18 00:12:25.628: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4251 PodName:pod-2091c7f4-22a2-4693-82e8-fe509d5912ca ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:25.628: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:25.716: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 18 00:12:25.716: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4251 PodName:pod-2091c7f4-22a2-4693-82e8-fe509d5912ca ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:25.716: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:25.802: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 18 00:12:25.802: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5e0d3466-e8ad-41a8-b7ef-d25a08285a10 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4251 PodName:pod-2091c7f4-22a2-4693-82e8-fe509d5912ca ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:25.802: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:25.882: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-5e0d3466-e8ad-41a8-b7ef-d25a08285a10 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2091c7f4-22a2-4693-82e8-fe509d5912ca in namespace persistent-local-volumes-test-4251 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:12:25.888: INFO: Deleting PersistentVolumeClaim "pvc-9s6bt" Jun 18 00:12:25.892: INFO: Deleting PersistentVolume "local-pv4r854" STEP: Removing the test directory Jun 18 00:12:25.896: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5e0d3466-e8ad-41a8-b7ef-d25a08285a10 && rm -r /tmp/local-volume-test-5e0d3466-e8ad-41a8-b7ef-d25a08285a10-backend] Namespace:persistent-local-volumes-test-4251 PodName:hostexec-node1-zr9dt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 18 00:12:25.896: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:26.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4251" for this suite. • [SLOW TEST:14.628 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":10,"skipped":452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:22.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 18 00:12:22.073: INFO: Waiting up to 5m0s for pod "pod-9ba68b4e-76d4-4852-80f6-898e754d51fb" in namespace "emptydir-7113" to be "Succeeded or Failed" Jun 18 00:12:22.076: INFO: Pod "pod-9ba68b4e-76d4-4852-80f6-898e754d51fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479988ms Jun 18 00:12:24.080: INFO: Pod "pod-9ba68b4e-76d4-4852-80f6-898e754d51fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006525239s Jun 18 00:12:26.083: INFO: Pod "pod-9ba68b4e-76d4-4852-80f6-898e754d51fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009879465s STEP: Saw pod success Jun 18 00:12:26.083: INFO: Pod "pod-9ba68b4e-76d4-4852-80f6-898e754d51fb" satisfied condition "Succeeded or Failed" Jun 18 00:12:26.085: INFO: Trying to get logs from node node2 pod pod-9ba68b4e-76d4-4852-80f6-898e754d51fb container test-container: STEP: delete the pod Jun 18 00:12:26.099: INFO: Waiting for pod pod-9ba68b4e-76d4-4852-80f6-898e754d51fb to disappear Jun 18 00:12:26.101: INFO: Pod pod-9ba68b4e-76d4-4852-80f6-898e754d51fb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:26.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7113" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":19,"skipped":587,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:12.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-3fd62342-876c-4b12-b0be-e493606c478b" Jun 18 00:12:18.129: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-3fd62342-876c-4b12-b0be-e493606c478b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-3fd62342-876c-4b12-b0be-e493606c478b" "/tmp/local-volume-test-3fd62342-876c-4b12-b0be-e493606c478b"] Namespace:persistent-local-volumes-test-2965 PodName:hostexec-node1-s27tf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:18.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:12:18.262: INFO: Creating a PV followed by a PVC Jun 18 00:12:18.269: INFO: Waiting for PV local-pvdh8q8 to bind to PVC pvc-qbthz Jun 18 00:12:18.269: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-qbthz] to have phase Bound Jun 18 00:12:18.271: INFO: PersistentVolumeClaim pvc-qbthz found but phase is Pending instead of Bound. 
Jun 18 00:12:20.275: INFO: PersistentVolumeClaim pvc-qbthz found and phase=Bound (2.006039987s) Jun 18 00:12:20.275: INFO: Waiting up to 3m0s for PersistentVolume local-pvdh8q8 to have phase Bound Jun 18 00:12:20.277: INFO: PersistentVolume local-pvdh8q8 found and phase=Bound (1.813743ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:12:26.302: INFO: pod "pod-1b8e0ea2-a37b-4aca-a907-c456df7a06c6" created on Node "node1" STEP: Writing in pod1 Jun 18 00:12:26.302: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2965 PodName:pod-1b8e0ea2-a37b-4aca-a907-c456df7a06c6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:26.302: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:26.416: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 18 00:12:26.416: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2965 PodName:pod-1b8e0ea2-a37b-4aca-a907-c456df7a06c6 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:26.416: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:26.936: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-1b8e0ea2-a37b-4aca-a907-c456df7a06c6 in namespace persistent-local-volumes-test-2965 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:12:26.941: INFO: Deleting PersistentVolumeClaim "pvc-qbthz" Jun 18 00:12:26.945: INFO: Deleting PersistentVolume "local-pvdh8q8" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-3fd62342-876c-4b12-b0be-e493606c478b" Jun 18 00:12:26.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-3fd62342-876c-4b12-b0be-e493606c478b"] Namespace:persistent-local-volumes-test-2965 PodName:hostexec-node1-s27tf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:26.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:12:27.215: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3fd62342-876c-4b12-b0be-e493606c478b] Namespace:persistent-local-volumes-test-2965 PodName:hostexec-node1-s27tf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:27.215: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:27.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2965" for this suite. • [SLOW TEST:15.277 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":13,"skipped":418,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:01.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-7515 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 18 00:11:01.626: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-attacher Jun 18 00:11:01.629: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7515 Jun 18 00:11:01.629: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7515 Jun 18 00:11:01.631: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7515 Jun 18 00:11:01.634: INFO: creating *v1.Role: csi-mock-volumes-7515-6817/external-attacher-cfg-csi-mock-volumes-7515 Jun 18 00:11:01.636: INFO: creating *v1.RoleBinding: csi-mock-volumes-7515-6817/csi-attacher-role-cfg Jun 18 00:11:01.639: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-provisioner Jun 18 00:11:01.642: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7515 Jun 18 00:11:01.642: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7515 Jun 18 00:11:01.645: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7515 Jun 18 00:11:01.648: INFO: creating *v1.Role: csi-mock-volumes-7515-6817/external-provisioner-cfg-csi-mock-volumes-7515 Jun 18 00:11:01.652: INFO: creating *v1.RoleBinding: csi-mock-volumes-7515-6817/csi-provisioner-role-cfg Jun 18 00:11:01.654: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-resizer Jun 18 00:11:01.656: INFO: creating 
*v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7515 Jun 18 00:11:01.656: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7515 Jun 18 00:11:01.659: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7515 Jun 18 00:11:01.663: INFO: creating *v1.Role: csi-mock-volumes-7515-6817/external-resizer-cfg-csi-mock-volumes-7515 Jun 18 00:11:01.665: INFO: creating *v1.RoleBinding: csi-mock-volumes-7515-6817/csi-resizer-role-cfg Jun 18 00:11:01.668: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-snapshotter Jun 18 00:11:01.671: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7515 Jun 18 00:11:01.671: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7515 Jun 18 00:11:01.673: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7515 Jun 18 00:11:01.676: INFO: creating *v1.Role: csi-mock-volumes-7515-6817/external-snapshotter-leaderelection-csi-mock-volumes-7515 Jun 18 00:11:01.678: INFO: creating *v1.RoleBinding: csi-mock-volumes-7515-6817/external-snapshotter-leaderelection Jun 18 00:11:01.681: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-mock Jun 18 00:11:01.683: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7515 Jun 18 00:11:01.685: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7515 Jun 18 00:11:01.688: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7515 Jun 18 00:11:01.691: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7515 Jun 18 00:11:01.693: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7515 Jun 18 00:11:01.695: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7515 Jun 18 00:11:01.698: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7515 Jun 18 00:11:01.701: INFO: creating *v1.StatefulSet: csi-mock-volumes-7515-6817/csi-mockplugin Jun 18 00:11:01.705: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7515 Jun 18 00:11:01.708: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7515" Jun 18 00:11:01.710: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7515 to register on node node1 I0618 00:11:12.806158 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0618 00:11:12.808760 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7515","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:11:12.811211 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0618 00:11:12.813060 28 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0618 00:11:12.915485 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7515","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:11:13.870073 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7515"},"Error":"","FullError":null} STEP: Creating pod Jun 18 00:11:17.979: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:11:17.983: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-b2kjj] to have phase Bound Jun 18 00:11:17.986: INFO: PersistentVolumeClaim pvc-b2kjj found but phase is Pending instead of Bound. I0618 00:11:18.045740 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-fdf1325d-01ac-49ef-9831-46fe8648f679","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-fdf1325d-01ac-49ef-9831-46fe8648f679"}}},"Error":"","FullError":null} Jun 18 00:11:19.989: INFO: PersistentVolumeClaim pvc-b2kjj found and phase=Bound (2.005422747s) Jun 18 00:11:20.004: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-b2kjj] to have phase Bound Jun 18 00:11:20.006: INFO: PersistentVolumeClaim pvc-b2kjj found and phase=Bound (2.128499ms) STEP: Waiting for expected CSI calls I0618 00:11:22.976760 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:22.979688 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdf1325d-01ac-49ef-9831-46fe8648f679/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-fdf1325d-01ac-49ef-9831-46fe8648f679","storage.kubernetes.io/csiProvisionerIdentity":"1655511072816-8081-csi-mock-csi-mock-volumes-7515"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I0618 00:11:23.493824 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:23.496290 28 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdf1325d-01ac-49ef-9831-46fe8648f679/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-fdf1325d-01ac-49ef-9831-46fe8648f679","storage.kubernetes.io/csiProvisionerIdentity":"1655511072816-8081-csi-mock-csi-mock-volumes-7515"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I0618 00:11:24.593447 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:24.602183 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdf1325d-01ac-49ef-9831-46fe8648f679/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-fdf1325d-01ac-49ef-9831-46fe8648f679","storage.kubernetes.io/csiProvisionerIdentity":"1655511072816-8081-csi-mock-csi-mock-volumes-7515"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I0618 00:11:26.664231 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:11:26.720: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:27.531356 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdf1325d-01ac-49ef-9831-46fe8648f679/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-fdf1325d-01ac-49ef-9831-46fe8648f679","storage.kubernetes.io/csiProvisionerIdentity":"1655511072816-8081-csi-mock-csi-mock-volumes-7515"}},"Response":{},"Error":"","FullError":null} I0618 00:11:27.536969 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:11:27.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Waiting for pod to be running Jun 18 00:11:28.849: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:29.551: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:29.723100 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdf1325d-01ac-49ef-9831-46fe8648f679/globalmount","target_path":"/var/lib/kubelet/pods/2493b4b0-9a68-41c7-8b78-856b42c7c447/volumes/kubernetes.io~csi/pvc-fdf1325d-01ac-49ef-9831-46fe8648f679/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-fdf1325d-01ac-49ef-9831-46fe8648f679","storage.kubernetes.io/csiProvisionerIdentity":"1655511072816-8081-csi-mock-csi-mock-volumes-7515"}},"Response":{},"Error":"","FullError":null} STEP: Deleting the previously created pod 
Jun 18 00:11:36.021: INFO: Deleting pod "pvc-volume-tester-mhflb" in namespace "csi-mock-volumes-7515" Jun 18 00:11:36.024: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mhflb" to be fully deleted Jun 18 00:11:37.899: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:37.992338 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2493b4b0-9a68-41c7-8b78-856b42c7c447/volumes/kubernetes.io~csi/pvc-fdf1325d-01ac-49ef-9831-46fe8648f679/mount"},"Response":{},"Error":"","FullError":null} I0618 00:11:38.002608 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:38.004346 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdf1325d-01ac-49ef-9831-46fe8648f679/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-mhflb Jun 18 00:11:41.031: INFO: Deleting pod "pvc-volume-tester-mhflb" in namespace "csi-mock-volumes-7515" STEP: Deleting claim pvc-b2kjj Jun 18 00:11:41.042: INFO: Waiting up to 2m0s for PersistentVolume pvc-fdf1325d-01ac-49ef-9831-46fe8648f679 to get deleted Jun 18 00:11:41.044: INFO: PersistentVolume pvc-fdf1325d-01ac-49ef-9831-46fe8648f679 found and phase=Bound (2.455721ms) I0618 00:11:41.057367 28 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 18 00:11:43.048: INFO: PersistentVolume pvc-fdf1325d-01ac-49ef-9831-46fe8648f679 was removed STEP: Deleting storageclass csi-mock-volumes-7515-sclgnbm STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7515 STEP: Waiting for namespaces [csi-mock-volumes-7515] to vanish STEP: uninstalling csi mock driver Jun 18 00:11:49.077: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-attacher Jun 18 00:11:49.081: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7515 Jun 18 00:11:49.084: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7515 Jun 18 00:11:49.088: INFO: deleting *v1.Role: csi-mock-volumes-7515-6817/external-attacher-cfg-csi-mock-volumes-7515 Jun 18 00:11:49.091: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7515-6817/csi-attacher-role-cfg Jun 18 00:11:49.094: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-provisioner Jun 18 00:11:49.098: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7515 Jun 18 00:11:49.101: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7515 Jun 18 00:11:49.105: INFO: deleting *v1.Role: csi-mock-volumes-7515-6817/external-provisioner-cfg-csi-mock-volumes-7515 Jun 18 00:11:49.109: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7515-6817/csi-provisioner-role-cfg Jun 18 00:11:49.112: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-resizer Jun 18 00:11:49.115: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7515 Jun 18 00:11:49.119: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7515 Jun 18 00:11:49.122: INFO: deleting *v1.Role: csi-mock-volumes-7515-6817/external-resizer-cfg-csi-mock-volumes-7515 Jun 
18 00:11:49.125: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7515-6817/csi-resizer-role-cfg Jun 18 00:11:49.129: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-snapshotter Jun 18 00:11:49.132: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7515 Jun 18 00:11:49.135: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7515 Jun 18 00:11:49.138: INFO: deleting *v1.Role: csi-mock-volumes-7515-6817/external-snapshotter-leaderelection-csi-mock-volumes-7515 Jun 18 00:11:49.142: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7515-6817/external-snapshotter-leaderelection Jun 18 00:11:49.145: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7515-6817/csi-mock Jun 18 00:11:49.148: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7515 Jun 18 00:11:49.152: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7515 Jun 18 00:11:49.155: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7515 Jun 18 00:11:49.158: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7515 Jun 18 00:11:49.162: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7515 Jun 18 00:11:49.165: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7515 Jun 18 00:11:49.168: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7515 Jun 18 00:11:49.172: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7515-6817/csi-mockplugin Jun 18 00:11:49.175: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7515 STEP: deleting the driver namespace: csi-mock-volumes-7515-6817 STEP: Waiting for namespaces [csi-mock-volumes-7515-6817] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:33.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:91.655 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error","total":-1,"completed":8,"skipped":299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:12.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 STEP: Building a driver namespace object, basename csi-mock-volumes-8225 STEP: Waiting for a 
default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:11:12.082: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-attacher Jun 18 00:11:12.084: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8225 Jun 18 00:11:12.084: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8225 Jun 18 00:11:12.087: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8225 Jun 18 00:11:12.091: INFO: creating *v1.Role: csi-mock-volumes-8225-4042/external-attacher-cfg-csi-mock-volumes-8225 Jun 18 00:11:12.093: INFO: creating *v1.RoleBinding: csi-mock-volumes-8225-4042/csi-attacher-role-cfg Jun 18 00:11:12.096: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-provisioner Jun 18 00:11:12.098: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8225 Jun 18 00:11:12.098: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8225 Jun 18 00:11:12.101: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8225 Jun 18 00:11:12.104: INFO: creating *v1.Role: csi-mock-volumes-8225-4042/external-provisioner-cfg-csi-mock-volumes-8225 Jun 18 00:11:12.107: INFO: creating *v1.RoleBinding: csi-mock-volumes-8225-4042/csi-provisioner-role-cfg Jun 18 00:11:12.109: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-resizer Jun 18 00:11:12.112: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8225 Jun 18 00:11:12.113: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8225 Jun 18 00:11:12.115: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8225 Jun 18 00:11:12.118: INFO: creating *v1.Role: csi-mock-volumes-8225-4042/external-resizer-cfg-csi-mock-volumes-8225 Jun 18 00:11:12.121: INFO: creating *v1.RoleBinding: csi-mock-volumes-8225-4042/csi-resizer-role-cfg Jun 18 00:11:12.125: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-snapshotter Jun 18 00:11:12.127: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8225 Jun 18 00:11:12.127: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8225 Jun 18 00:11:12.129: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8225 Jun 18 00:11:12.132: INFO: creating *v1.Role: csi-mock-volumes-8225-4042/external-snapshotter-leaderelection-csi-mock-volumes-8225 Jun 18 00:11:12.134: INFO: creating *v1.RoleBinding: csi-mock-volumes-8225-4042/external-snapshotter-leaderelection Jun 18 00:11:12.136: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-mock Jun 18 00:11:12.138: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8225 Jun 18 00:11:12.141: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8225 Jun 18 00:11:12.143: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8225 Jun 18 00:11:12.145: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8225 Jun 18 00:11:12.148: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8225 Jun 18 00:11:12.150: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8225 Jun 18 00:11:12.152: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8225 Jun 18 00:11:12.155: INFO: creating *v1.StatefulSet: csi-mock-volumes-8225-4042/csi-mockplugin Jun 
18 00:11:12.159: INFO: creating *v1.StatefulSet: csi-mock-volumes-8225-4042/csi-mockplugin-attacher Jun 18 00:11:12.163: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8225 to register on node node1 STEP: Creating pod Jun 18 00:11:28.431: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:11:28.435: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jwkrx] to have phase Bound Jun 18 00:11:28.437: INFO: PersistentVolumeClaim pvc-jwkrx found but phase is Pending instead of Bound. Jun 18 00:11:30.443: INFO: PersistentVolumeClaim pvc-jwkrx found and phase=Bound (2.007656461s) STEP: Deleting the previously created pod Jun 18 00:11:54.464: INFO: Deleting pod "pvc-volume-tester-6nddk" in namespace "csi-mock-volumes-8225" Jun 18 00:11:54.470: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6nddk" to be fully deleted STEP: Checking CSI driver logs Jun 18 00:12:00.489: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/a07d6a7a-766e-4621-90f8-7c639c2b4bcd/volumes/kubernetes.io~csi/pvc-4fa23662-eb77-4c34-8b7c-648c689eabe7/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-6nddk Jun 18 00:12:00.489: INFO: Deleting pod "pvc-volume-tester-6nddk" in namespace "csi-mock-volumes-8225" STEP: Deleting claim pvc-jwkrx Jun 18 00:12:00.499: INFO: Waiting up to 2m0s for PersistentVolume pvc-4fa23662-eb77-4c34-8b7c-648c689eabe7 to get deleted Jun 18 00:12:00.501: INFO: PersistentVolume pvc-4fa23662-eb77-4c34-8b7c-648c689eabe7 found and phase=Bound (2.145996ms) Jun 18 00:12:02.504: INFO: PersistentVolume pvc-4fa23662-eb77-4c34-8b7c-648c689eabe7 was removed STEP: Deleting storageclass csi-mock-volumes-8225-sckh44j STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-8225 STEP: Waiting for namespaces [csi-mock-volumes-8225] to vanish STEP: uninstalling csi mock driver Jun 18 00:12:08.519: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-attacher Jun 18 00:12:08.523: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8225 Jun 18 00:12:08.527: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8225 Jun 18 00:12:08.530: INFO: deleting *v1.Role: csi-mock-volumes-8225-4042/external-attacher-cfg-csi-mock-volumes-8225 Jun 18 00:12:08.534: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8225-4042/csi-attacher-role-cfg Jun 18 00:12:08.537: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-provisioner Jun 18 00:12:08.541: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8225 Jun 18 00:12:08.544: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8225 Jun 18 00:12:08.550: INFO: deleting *v1.Role: csi-mock-volumes-8225-4042/external-provisioner-cfg-csi-mock-volumes-8225 Jun 18 00:12:08.557: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8225-4042/csi-provisioner-role-cfg Jun 18 00:12:08.564: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-resizer Jun 18 00:12:08.571: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8225 Jun 18 00:12:08.575: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8225 Jun 18 00:12:08.579: INFO: deleting *v1.Role: csi-mock-volumes-8225-4042/external-resizer-cfg-csi-mock-volumes-8225 
Jun 18 00:12:08.582: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8225-4042/csi-resizer-role-cfg Jun 18 00:12:08.586: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-snapshotter Jun 18 00:12:08.589: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8225 Jun 18 00:12:08.592: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8225 Jun 18 00:12:08.596: INFO: deleting *v1.Role: csi-mock-volumes-8225-4042/external-snapshotter-leaderelection-csi-mock-volumes-8225 Jun 18 00:12:08.600: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8225-4042/external-snapshotter-leaderelection Jun 18 00:12:08.603: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8225-4042/csi-mock Jun 18 00:12:08.606: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8225 Jun 18 00:12:08.610: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8225 Jun 18 00:12:08.614: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8225 Jun 18 00:12:08.617: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8225 Jun 18 00:12:08.620: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8225 Jun 18 00:12:08.623: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8225 Jun 18 00:12:08.626: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8225 Jun 18 00:12:08.629: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8225-4042/csi-mockplugin Jun 18 00:12:08.632: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8225-4042/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-8225-4042 STEP: Waiting for namespaces [csi-mock-volumes-8225-4042] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:36.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:84.643 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1496 token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":17,"skipped":655,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:36.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] 
[NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 STEP: Creating a pod to test downward API volume plugin Jun 18 00:12:36.808: INFO: Waiting up to 5m0s for pod "metadata-volume-7849183d-b350-4415-ac29-648580c4f252" in namespace "projected-1692" to be "Succeeded or Failed" Jun 18 00:12:36.810: INFO: Pod "metadata-volume-7849183d-b350-4415-ac29-648580c4f252": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260658ms Jun 18 00:12:38.814: INFO: Pod "metadata-volume-7849183d-b350-4415-ac29-648580c4f252": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006145049s Jun 18 00:12:40.820: INFO: Pod "metadata-volume-7849183d-b350-4415-ac29-648580c4f252": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012463941s Jun 18 00:12:42.823: INFO: Pod "metadata-volume-7849183d-b350-4415-ac29-648580c4f252": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015578472s STEP: Saw pod success Jun 18 00:12:42.823: INFO: Pod "metadata-volume-7849183d-b350-4415-ac29-648580c4f252" satisfied condition "Succeeded or Failed" Jun 18 00:12:42.826: INFO: Trying to get logs from node node1 pod metadata-volume-7849183d-b350-4415-ac29-648580c4f252 container client-container: STEP: delete the pod Jun 18 00:12:42.836: INFO: Waiting for pod metadata-volume-7849183d-b350-4415-ac29-648580c4f252 to disappear Jun 18 00:12:42.838: INFO: Pod metadata-volume-7849183d-b350-4415-ac29-648580c4f252 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:42.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1692" for this suite. 
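
The pod exercised here mounts a projected downward-API volume into a single short-lived container that runs as a non-root user with an fsGroup and a non-default file mode, then prints the projected file so the framework can check its content and permissions. A minimal sketch of such a pod using the client-go types follows; the image, command, UID/GID and mode values are illustrative assumptions rather than the framework's exact choices:

    // Sketch of a pod with a projected downwardAPI volume, run as non-root
    // with an fsGroup and an explicit defaultMode. Concrete values are assumed.
    package e2esketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func projectedDownwardAPIPod() *corev1.Pod {
        uid, fsGroup := int64(1000), int64(2000)
        mode := int32(0440)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "metadata-volume-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: &uid,     // non-root
                    FSGroup:   &fsGroup, // group ownership of the projected files
                },
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.36", // assumed image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            DefaultMode: &mode, // applied to the projected files
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path:     "podname",
                                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }

The framework then waits for the pod to reach Succeeded and compares the container log against the expected pod name, which is the "Succeeded or Failed" / "Saw pod success" sequence above.
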
• [SLOW TEST:6.072 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":18,"skipped":711,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:17.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b" Jun 18 00:12:19.070: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b && dd if=/dev/zero of=/tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b/file] Namespace:persistent-local-volumes-test-3761 PodName:hostexec-node2-qskzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:19.070: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:19.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3761 PodName:hostexec-node2-qskzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:19.291: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:19.461: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b && chmod o+rwx /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b] Namespace:persistent-local-volumes-test-3761 PodName:hostexec-node2-qskzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:19.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:12:19.697: INFO: Creating a PV followed by a PVC Jun 18 00:12:19.702: INFO: Waiting for PV local-pv5drbr to bind to PVC pvc-mszs6 Jun 18 00:12:19.702: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-mszs6] to have phase Bound Jun 18 
00:12:19.704: INFO: PersistentVolumeClaim pvc-mszs6 found but phase is Pending instead of Bound. Jun 18 00:12:21.710: INFO: PersistentVolumeClaim pvc-mszs6 found but phase is Pending instead of Bound. Jun 18 00:12:23.715: INFO: PersistentVolumeClaim pvc-mszs6 found but phase is Pending instead of Bound. Jun 18 00:12:25.720: INFO: PersistentVolumeClaim pvc-mszs6 found but phase is Pending instead of Bound. Jun 18 00:12:27.724: INFO: PersistentVolumeClaim pvc-mszs6 found and phase=Bound (8.021759855s) Jun 18 00:12:27.724: INFO: Waiting up to 3m0s for PersistentVolume local-pv5drbr to have phase Bound Jun 18 00:12:27.726: INFO: PersistentVolume local-pv5drbr found and phase=Bound (2.045579ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 18 00:12:35.751: INFO: pod "pod-d54f9e62-6cd6-412b-a963-3f4438543514" created on Node "node2" STEP: Writing in pod1 Jun 18 00:12:35.751: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3761 PodName:pod-d54f9e62-6cd6-412b-a963-3f4438543514 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:35.751: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:35.830: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:12:35.830: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3761 PodName:pod-d54f9e62-6cd6-412b-a963-3f4438543514 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:35.830: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:35.914: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 18 00:12:43.939: INFO: pod "pod-aa7a48b6-89d9-47a0-aee9-06e579884b5f" created on Node "node2" Jun 18 00:12:43.939: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3761 PodName:pod-aa7a48b6-89d9-47a0-aee9-06e579884b5f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:43.939: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:44.028: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 18 00:12:44.028: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3761 PodName:pod-aa7a48b6-89d9-47a0-aee9-06e579884b5f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:44.028: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:44.109: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 18 00:12:44.109: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3761 
PodName:pod-d54f9e62-6cd6-412b-a963-3f4438543514 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:44.109: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:44.190: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-d54f9e62-6cd6-412b-a963-3f4438543514 in namespace persistent-local-volumes-test-3761 STEP: Deleting pod2 STEP: Deleting pod pod-aa7a48b6-89d9-47a0-aee9-06e579884b5f in namespace persistent-local-volumes-test-3761 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:12:44.200: INFO: Deleting PersistentVolumeClaim "pvc-mszs6" Jun 18 00:12:44.204: INFO: Deleting PersistentVolume "local-pv5drbr" Jun 18 00:12:44.207: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b] Namespace:persistent-local-volumes-test-3761 PodName:hostexec-node2-qskzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:44.208: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:44.320: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-3761 PodName:hostexec-node2-qskzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:44.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b/file Jun 18 00:12:44.428: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-3761 PodName:hostexec-node2-qskzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:44.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b Jun 18 00:12:44.512: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-57fbd27c-fdcb-410f-85a2-3649cbad9e6b] Namespace:persistent-local-volumes-test-3761 PodName:hostexec-node2-qskzr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:44.512: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:44.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3761" for this suite. 
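
A pattern that repeats throughout these local-volume cases is the framework polling a PersistentVolumeClaim until it reports phase Bound; that is what the "found but phase is Pending instead of Bound" lines above are. A minimal client-go sketch of that wait is below; the poll interval, timeout and function name are assumptions, not the framework's exact helper:

    // Sketch of polling a PVC until it is Bound, mirroring the framework's
    // "Waiting up to timeout=3m0s for PersistentVolumeClaims ... to have phase
    // Bound" loop. Interval, timeout and naming here are assumptions.
    package e2esketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPVCBound(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
            pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err // stop on API errors; a more lenient caller could keep polling
            }
            if pvc.Status.Phase == corev1.ClaimBound {
                return true, nil
            }
            fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound\n", name, pvc.Status.Phase)
            return false, nil
        })
    }

For statically provisioned local PVs such as the blockfswithformat volume above, binding only happens once the PV controller matches the pre-created PV with the claim, so a few Pending polls before Bound are normal.
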
• [SLOW TEST:27.589 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":234,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:17.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:12:23.936: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-e0c1fcf0-87ca-4f3f-b3c6-89bd762ad616 && mount --bind /tmp/local-volume-test-e0c1fcf0-87ca-4f3f-b3c6-89bd762ad616 /tmp/local-volume-test-e0c1fcf0-87ca-4f3f-b3c6-89bd762ad616] Namespace:persistent-local-volumes-test-4398 PodName:hostexec-node1-b8t7c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:23.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:12:24.031: INFO: Creating a PV followed by a PVC Jun 18 00:12:24.037: INFO: Waiting for PV local-pvh7gd7 to bind to PVC pvc-mhrwt Jun 18 00:12:24.037: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-mhrwt] to have phase Bound Jun 18 00:12:24.039: INFO: PersistentVolumeClaim pvc-mhrwt found but phase is Pending instead of Bound. Jun 18 00:12:26.044: INFO: PersistentVolumeClaim pvc-mhrwt found but phase is Pending instead of Bound. 
Jun 18 00:12:28.048: INFO: PersistentVolumeClaim pvc-mhrwt found and phase=Bound (4.010384707s) Jun 18 00:12:28.048: INFO: Waiting up to 3m0s for PersistentVolume local-pvh7gd7 to have phase Bound Jun 18 00:12:28.050: INFO: PersistentVolume local-pvh7gd7 found and phase=Bound (2.221066ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 18 00:12:40.078: INFO: pod "pod-5a44c36e-7540-44f1-b310-026703d88b8b" created on Node "node1" STEP: Writing in pod1 Jun 18 00:12:40.078: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4398 PodName:pod-5a44c36e-7540-44f1-b310-026703d88b8b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:40.078: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:40.185: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:12:40.185: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4398 PodName:pod-5a44c36e-7540-44f1-b310-026703d88b8b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:40.185: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:40.262: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 18 00:12:44.283: INFO: pod "pod-1cab3490-e2c5-4df8-93f3-2768164c3a09" created on Node "node1" Jun 18 00:12:44.283: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4398 PodName:pod-1cab3490-e2c5-4df8-93f3-2768164c3a09 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:44.283: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:45.567: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 18 00:12:45.567: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e0c1fcf0-87ca-4f3f-b3c6-89bd762ad616 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4398 PodName:pod-1cab3490-e2c5-4df8-93f3-2768164c3a09 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:45.567: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:45.721: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e0c1fcf0-87ca-4f3f-b3c6-89bd762ad616 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 18 00:12:45.721: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4398 PodName:pod-5a44c36e-7540-44f1-b310-026703d88b8b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:12:45.721: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:12:45.978: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-e0c1fcf0-87ca-4f3f-b3c6-89bd762ad616", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-5a44c36e-7540-44f1-b310-026703d88b8b in namespace persistent-local-volumes-test-4398 STEP: Deleting pod2 STEP: Deleting pod pod-1cab3490-e2c5-4df8-93f3-2768164c3a09 in namespace persistent-local-volumes-test-4398 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:12:45.986: INFO: Deleting PersistentVolumeClaim "pvc-mhrwt" Jun 18 00:12:45.990: INFO: Deleting PersistentVolume "local-pvh7gd7" STEP: Removing the test directory Jun 18 00:12:45.994: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-e0c1fcf0-87ca-4f3f-b3c6-89bd762ad616 && rm -r /tmp/local-volume-test-e0c1fcf0-87ca-4f3f-b3c6-89bd762ad616] Namespace:persistent-local-volumes-test-4398 PodName:hostexec-node1-b8t7c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:45.994: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:12:46.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4398" for this suite. • [SLOW TEST:28.355 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":582,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:14.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] two pods: should call NodeStage after previous NodeUnstage transient error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 STEP: Building a driver namespace object, basename csi-mock-volumes-9878 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 18 00:11:14.122: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-attacher Jun 18 00:11:14.124: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9878 Jun 18 00:11:14.124: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9878 Jun 18 00:11:14.127: INFO: creating *v1.ClusterRoleBinding: 
csi-attacher-role-csi-mock-volumes-9878 Jun 18 00:11:14.130: INFO: creating *v1.Role: csi-mock-volumes-9878-8589/external-attacher-cfg-csi-mock-volumes-9878 Jun 18 00:11:14.133: INFO: creating *v1.RoleBinding: csi-mock-volumes-9878-8589/csi-attacher-role-cfg Jun 18 00:11:14.136: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-provisioner Jun 18 00:11:14.139: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9878 Jun 18 00:11:14.139: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9878 Jun 18 00:11:14.142: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9878 Jun 18 00:11:14.146: INFO: creating *v1.Role: csi-mock-volumes-9878-8589/external-provisioner-cfg-csi-mock-volumes-9878 Jun 18 00:11:14.149: INFO: creating *v1.RoleBinding: csi-mock-volumes-9878-8589/csi-provisioner-role-cfg Jun 18 00:11:14.151: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-resizer Jun 18 00:11:14.154: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9878 Jun 18 00:11:14.154: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9878 Jun 18 00:11:14.156: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9878 Jun 18 00:11:14.159: INFO: creating *v1.Role: csi-mock-volumes-9878-8589/external-resizer-cfg-csi-mock-volumes-9878 Jun 18 00:11:14.162: INFO: creating *v1.RoleBinding: csi-mock-volumes-9878-8589/csi-resizer-role-cfg Jun 18 00:11:14.164: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-snapshotter Jun 18 00:11:14.167: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9878 Jun 18 00:11:14.167: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9878 Jun 18 00:11:14.169: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9878 Jun 18 00:11:14.172: INFO: creating *v1.Role: csi-mock-volumes-9878-8589/external-snapshotter-leaderelection-csi-mock-volumes-9878 Jun 18 00:11:14.175: INFO: creating *v1.RoleBinding: csi-mock-volumes-9878-8589/external-snapshotter-leaderelection Jun 18 00:11:14.177: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-mock Jun 18 00:11:14.180: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9878 Jun 18 00:11:14.183: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9878 Jun 18 00:11:14.186: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9878 Jun 18 00:11:14.190: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9878 Jun 18 00:11:14.193: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9878 Jun 18 00:11:14.195: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9878 Jun 18 00:11:14.198: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9878 Jun 18 00:11:14.201: INFO: creating *v1.StatefulSet: csi-mock-volumes-9878-8589/csi-mockplugin Jun 18 00:11:14.205: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9878 Jun 18 00:11:14.208: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9878" Jun 18 00:11:14.210: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9878 to register on node node1 I0618 00:11:27.565004 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} 
I0618 00:11:27.605155 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9878","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:11:27.607010 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0618 00:11:27.609115 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0618 00:11:27.956635 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9878","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:11:28.782649 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9878"},"Error":"","FullError":null} STEP: Creating pod Jun 18 00:11:30.478: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:11:30.482: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-t9qcv] to have phase Bound Jun 18 00:11:30.484: INFO: PersistentVolumeClaim pvc-t9qcv found but phase is Pending instead of Bound. 
I0618 00:11:30.490352 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9"}}},"Error":"","FullError":null} Jun 18 00:11:32.486: INFO: PersistentVolumeClaim pvc-t9qcv found and phase=Bound (2.004157989s) Jun 18 00:11:32.500: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-t9qcv] to have phase Bound Jun 18 00:11:32.503: INFO: PersistentVolumeClaim pvc-t9qcv found and phase=Bound (2.357038ms) I0618 00:11:33.039046 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:11:33.041: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:33.189918 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9","storage.kubernetes.io/csiProvisionerIdentity":"1655511087612-8081-csi-mock-csi-mock-volumes-9878"}},"Response":{},"Error":"","FullError":null} I0618 00:11:33.475902 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:11:33.477: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:33.745: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:33.872: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:33.996225 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/globalmount","target_path":"/var/lib/kubelet/pods/67838120-e382-4a22-935a-a8f1cdc6c03c/volumes/kubernetes.io~csi/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9","storage.kubernetes.io/csiProvisionerIdentity":"1655511087612-8081-csi-mock-csi-mock-volumes-9878"}},"Response":{},"Error":"","FullError":null} Jun 18 00:11:38.508: INFO: Deleting pod "pvc-volume-tester-m55fk" in namespace "csi-mock-volumes-9878" Jun 18 00:11:38.512: INFO: Wait up to 5m0s for pod "pvc-volume-tester-m55fk" to be fully deleted Jun 18 00:11:40.176: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:40.609673 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/67838120-e382-4a22-935a-a8f1cdc6c03c/volumes/kubernetes.io~csi/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/mount"},"Response":{},"Error":"","FullError":null} I0618 00:11:40.653887 40 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:40.655748 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0618 00:11:41.258921 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:41.260743 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0618 00:11:42.273332 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:42.274905 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0618 00:11:44.291923 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:44.293913 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0618 00:11:48.365415 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:11:48.366989 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/globalmount"},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake transient error","FullError":{"code":4,"message":"fake transient error"}} I0618 00:11:54.803318 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:11:54.805: INFO: >>> kubeConfig: 
/root/.kube/config Jun 18 00:11:55.022: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:11:55.115: INFO: >>> kubeConfig: /root/.kube/config I0618 00:11:55.415718 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/globalmount","target_path":"/var/lib/kubelet/pods/fe739052-a1b2-4493-bf60-05efbe262ca7/volumes/kubernetes.io~csi/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9","storage.kubernetes.io/csiProvisionerIdentity":"1655511087612-8081-csi-mock-csi-mock-volumes-9878"}},"Response":{},"Error":"","FullError":null} Jun 18 00:12:02.534: INFO: Deleting pod "pvc-volume-tester-96545" in namespace "csi-mock-volumes-9878" Jun 18 00:12:02.539: INFO: Wait up to 5m0s for pod "pvc-volume-tester-96545" to be fully deleted Jun 18 00:12:03.841: INFO: >>> kubeConfig: /root/.kube/config I0618 00:12:03.961654 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/fe739052-a1b2-4493-bf60-05efbe262ca7/volumes/kubernetes.io~csi/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/mount"},"Response":{},"Error":"","FullError":null} I0618 00:12:04.043615 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:12:04.045095 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls Jun 18 00:12:11.545: FAIL: while waiting for all CSI calls Unexpected error: <*errors.errorString | 0xc00056fc70>: { s: "Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0)", } Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0) occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func1.13.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017 +0x79e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001199680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc001199680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc001199680, 0x70f99e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 STEP: Deleting pod pvc-volume-tester-m55fk Jun 18 00:12:11.546: INFO: Deleting pod "pvc-volume-tester-m55fk" in namespace "csi-mock-volumes-9878" STEP: Deleting pod pvc-volume-tester-96545 Jun 18 00:12:11.548: INFO: Deleting pod "pvc-volume-tester-96545" in namespace "csi-mock-volumes-9878" STEP: Deleting claim pvc-t9qcv Jun 18 00:12:11.555: INFO: Waiting up to 2m0s for PersistentVolume pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9 to get deleted Jun 18 00:12:11.557: INFO: PersistentVolume pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9 found and phase=Bound (2.132988ms) I0618 00:12:11.567523 40 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 18 00:12:13.560: INFO: PersistentVolume pvc-aa62d067-493a-4e78-8ef7-54479f57d6f9 was removed STEP: Deleting storageclass csi-mock-volumes-9878-sclnlqg STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9878 STEP: Waiting for namespaces [csi-mock-volumes-9878] to vanish STEP: uninstalling csi mock driver Jun 18 00:12:19.589: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-attacher Jun 18 00:12:19.594: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9878 Jun 18 00:12:19.597: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9878 Jun 18 00:12:19.600: INFO: deleting *v1.Role: csi-mock-volumes-9878-8589/external-attacher-cfg-csi-mock-volumes-9878 Jun 18 00:12:19.604: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9878-8589/csi-attacher-role-cfg Jun 18 00:12:19.608: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-provisioner Jun 18 00:12:19.612: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9878 Jun 18 00:12:19.616: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9878 Jun 18 00:12:19.619: INFO: deleting *v1.Role: csi-mock-volumes-9878-8589/external-provisioner-cfg-csi-mock-volumes-9878 Jun 18 00:12:19.623: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9878-8589/csi-provisioner-role-cfg Jun 18 00:12:19.627: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-resizer Jun 18 00:12:19.631: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9878 Jun 18 00:12:19.635: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9878 Jun 18 00:12:19.638: INFO: deleting *v1.Role: csi-mock-volumes-9878-8589/external-resizer-cfg-csi-mock-volumes-9878 Jun 18 00:12:19.643: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9878-8589/csi-resizer-role-cfg Jun 18 00:12:19.647: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-snapshotter Jun 18 00:12:19.650: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9878 Jun 18 00:12:19.653: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9878 Jun 18 00:12:19.656: INFO: deleting *v1.Role: csi-mock-volumes-9878-8589/external-snapshotter-leaderelection-csi-mock-volumes-9878 Jun 18 00:12:19.660: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9878-8589/external-snapshotter-leaderelection Jun 18 00:12:19.663: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9878-8589/csi-mock Jun 18 00:12:19.666: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9878 Jun 18 00:12:19.670: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9878 Jun 18 00:12:19.673: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9878 Jun 18 00:12:19.676: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9878 Jun 18 00:12:19.679: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9878 Jun 18 00:12:19.683: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9878 Jun 18 00:12:19.686: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9878 Jun 18 00:12:19.689: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9878-8589/csi-mockplugin Jun 18 00:12:19.693: 
INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9878 STEP: deleting the driver namespace: csi-mock-volumes-9878-8589 STEP: Waiting for namespaces [csi-mock-volumes-9878-8589] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:03.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • Failure [109.660 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeUnstage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 two pods: should call NodeStage after previous NodeUnstage transient error [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 Jun 18 00:12:11.545: while waiting for all CSI calls Unexpected error: <*errors.errorString | 0xc00056fc70>: { s: "Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0)", } Unexpected CSI call 2: expected NodeStageVolume (0), got NodeUnstageVolume (0) occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017 ------------------------------ {"msg":"FAILED [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error","total":-1,"completed":13,"skipped":470,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:26.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-5867 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:12:26.152: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-attacher Jun 18 00:12:26.154: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5867 Jun 18 00:12:26.154: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5867 Jun 18 00:12:26.156: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5867 Jun 18 00:12:26.159: INFO: creating *v1.Role: csi-mock-volumes-5867-3395/external-attacher-cfg-csi-mock-volumes-5867 Jun 18 00:12:26.162: INFO: creating *v1.RoleBinding: csi-mock-volumes-5867-3395/csi-attacher-role-cfg Jun 18 00:12:26.164: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-provisioner Jun 18 00:12:26.167: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5867 Jun 18 00:12:26.167: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5867 Jun 18 00:12:26.170: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5867 Jun 18 
00:12:26.173: INFO: creating *v1.Role: csi-mock-volumes-5867-3395/external-provisioner-cfg-csi-mock-volumes-5867 Jun 18 00:12:26.176: INFO: creating *v1.RoleBinding: csi-mock-volumes-5867-3395/csi-provisioner-role-cfg Jun 18 00:12:26.179: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-resizer Jun 18 00:12:26.181: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5867 Jun 18 00:12:26.181: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5867 Jun 18 00:12:26.184: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5867 Jun 18 00:12:26.186: INFO: creating *v1.Role: csi-mock-volumes-5867-3395/external-resizer-cfg-csi-mock-volumes-5867 Jun 18 00:12:26.189: INFO: creating *v1.RoleBinding: csi-mock-volumes-5867-3395/csi-resizer-role-cfg Jun 18 00:12:26.191: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-snapshotter Jun 18 00:12:26.194: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5867 Jun 18 00:12:26.194: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5867 Jun 18 00:12:26.196: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5867 Jun 18 00:12:26.200: INFO: creating *v1.Role: csi-mock-volumes-5867-3395/external-snapshotter-leaderelection-csi-mock-volumes-5867 Jun 18 00:12:26.202: INFO: creating *v1.RoleBinding: csi-mock-volumes-5867-3395/external-snapshotter-leaderelection Jun 18 00:12:26.205: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-mock Jun 18 00:12:26.208: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5867 Jun 18 00:12:26.211: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5867 Jun 18 00:12:26.214: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5867 Jun 18 00:12:26.216: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5867 Jun 18 00:12:26.219: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5867 Jun 18 00:12:26.221: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5867 Jun 18 00:12:26.224: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5867 Jun 18 00:12:26.226: INFO: creating *v1.StatefulSet: csi-mock-volumes-5867-3395/csi-mockplugin Jun 18 00:12:26.230: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5867 Jun 18 00:12:26.233: INFO: creating *v1.StatefulSet: csi-mock-volumes-5867-3395/csi-mockplugin-attacher Jun 18 00:12:26.237: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5867" Jun 18 00:12:26.240: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5867 to register on node node2 STEP: Creating pod Jun 18 00:12:47.518: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 18 00:12:47.538: INFO: Deleting pod "pvc-volume-tester-97rm5" in namespace "csi-mock-volumes-5867" Jun 18 00:12:47.545: INFO: Wait up to 5m0s for pod "pvc-volume-tester-97rm5" to be fully deleted STEP: Deleting pod pvc-volume-tester-97rm5 Jun 18 00:12:47.547: INFO: Deleting pod "pvc-volume-tester-97rm5" in namespace "csi-mock-volumes-5867" STEP: Deleting claim pvc-dpvv5 STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-5867 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5867 STEP: Waiting 
for namespaces [csi-mock-volumes-5867] to vanish STEP: uninstalling csi mock driver Jun 18 00:12:53.570: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-attacher Jun 18 00:12:53.577: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5867 Jun 18 00:12:53.580: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5867 Jun 18 00:12:53.584: INFO: deleting *v1.Role: csi-mock-volumes-5867-3395/external-attacher-cfg-csi-mock-volumes-5867 Jun 18 00:12:53.588: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5867-3395/csi-attacher-role-cfg Jun 18 00:12:53.591: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-provisioner Jun 18 00:12:53.595: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5867 Jun 18 00:12:53.598: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5867 Jun 18 00:12:53.601: INFO: deleting *v1.Role: csi-mock-volumes-5867-3395/external-provisioner-cfg-csi-mock-volumes-5867 Jun 18 00:12:53.605: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5867-3395/csi-provisioner-role-cfg Jun 18 00:12:53.612: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-resizer Jun 18 00:12:53.616: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5867 Jun 18 00:12:53.620: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5867 Jun 18 00:12:53.623: INFO: deleting *v1.Role: csi-mock-volumes-5867-3395/external-resizer-cfg-csi-mock-volumes-5867 Jun 18 00:12:53.630: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5867-3395/csi-resizer-role-cfg Jun 18 00:12:53.633: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-snapshotter Jun 18 00:12:53.638: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5867 Jun 18 00:12:53.642: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5867 Jun 18 00:12:53.645: INFO: deleting *v1.Role: csi-mock-volumes-5867-3395/external-snapshotter-leaderelection-csi-mock-volumes-5867 Jun 18 00:12:53.649: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5867-3395/external-snapshotter-leaderelection Jun 18 00:12:53.652: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5867-3395/csi-mock Jun 18 00:12:53.656: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5867 Jun 18 00:12:53.659: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5867 Jun 18 00:12:53.662: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5867 Jun 18 00:12:53.665: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5867 Jun 18 00:12:53.668: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5867 Jun 18 00:12:53.671: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5867 Jun 18 00:12:53.675: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5867 Jun 18 00:12:53.678: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5867-3395/csi-mockplugin Jun 18 00:12:53.682: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5867 Jun 18 00:12:53.686: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5867-3395/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5867-3395 STEP: Waiting for namespaces [csi-mock-volumes-5867-3395] to vanish [AfterEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:05.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:39.610 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":11,"skipped":484,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:42.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Jun 18 00:13:12.927: INFO: Deleting pod "pv-3523"/"pod-ephm-test-projected-g6tl" Jun 18 00:13:12.927: INFO: Deleting pod "pod-ephm-test-projected-g6tl" in namespace "pv-3523" Jun 18 00:13:12.932: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-g6tl" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:20.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3523" for this suite. 
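The ephemeral-storage spec above deletes a pod whose secret volume source never existed and then waits "up to 5m0s ... to be fully deleted". A hedged sketch of that delete-then-poll-until-NotFound pattern with client-go (the helper name and intervals are illustrative, not the framework's):

package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deletePodAndWait issues the delete and then polls until the pod object is gone,
// which is what "Wait up to 5m0s for pod ... to be fully deleted" amounts to.
func deletePodAndWait(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	if err := cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // fully deleted
		}
		return false, err // still present (err == nil) or surface a real API error
	})
}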
• [SLOW TEST:38.061 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":19,"skipped":728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:21.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Jun 18 00:13:21.101: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:21.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6591" for this suite. 
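The PersistentVolumes GCEPD spec above is skipped in BeforeEach because this cluster runs with the "local" provider rather than gce/gke. A minimal sketch of that kind of provider gate, assuming the harness exposes the configured provider string and a skip callback (the helper below is illustrative, not the framework's actual skipper):

package e2esketch

import "fmt"

// skipUnlessProviderIs mimics the gate that produced
// "Only supported for providers [gce gke] (not local)" above.
// provider would come from the test context (e.g. --provider=local);
// skip is whatever mechanism the harness uses to mark a spec as skipped.
func skipUnlessProviderIs(provider string, skip func(string), supported ...string) {
	for _, s := range supported {
		if provider == s {
			return // provider is supported, run the spec
		}
	}
	skip(fmt.Sprintf("Only supported for providers %v (not %s)", supported, provider))
}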
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Jun 18 00:13:21.113: INFO: AfterEach: Cleaning up test resources Jun 18 00:13:21.113: INFO: pvc is nil Jun 18 00:13:21.113: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:156 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:46.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:12:56.302: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c09da36b-dae3-4cfb-9606-94ae61a8a01d] Namespace:persistent-local-volumes-test-442 PodName:hostexec-node2-gz964 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:12:56.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:12:56.553: INFO: Creating a PV followed by a PVC Jun 18 00:12:56.559: INFO: Waiting for PV local-pvtbgkd to bind to PVC pvc-zm4q8 Jun 18 00:12:56.559: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zm4q8] to have phase Bound Jun 18 00:12:56.562: INFO: PersistentVolumeClaim pvc-zm4q8 found but phase is Pending instead of Bound. Jun 18 00:12:58.565: INFO: PersistentVolumeClaim pvc-zm4q8 found but phase is Pending instead of Bound. Jun 18 00:13:00.570: INFO: PersistentVolumeClaim pvc-zm4q8 found but phase is Pending instead of Bound. Jun 18 00:13:02.573: INFO: PersistentVolumeClaim pvc-zm4q8 found but phase is Pending instead of Bound. Jun 18 00:13:04.577: INFO: PersistentVolumeClaim pvc-zm4q8 found but phase is Pending instead of Bound. Jun 18 00:13:06.580: INFO: PersistentVolumeClaim pvc-zm4q8 found but phase is Pending instead of Bound. Jun 18 00:13:08.613: INFO: PersistentVolumeClaim pvc-zm4q8 found but phase is Pending instead of Bound. Jun 18 00:13:10.616: INFO: PersistentVolumeClaim pvc-zm4q8 found but phase is Pending instead of Bound. 
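Before the claim above can bind, the test has prepared a host directory on node2 via nsenter and created a "local" PersistentVolume pointing at it, followed by a matching PVC. A rough sketch of what such a local PV object looks like when built with the Go API types (capacity, storage class name and the hostname label key are illustrative placeholders, not the test's exact spec):

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV sketches a "local" PersistentVolume pinned to one node, similar in shape
// to the local-pv* objects created above.
func localPV(name, nodeName, path string) *v1.PersistentVolume {
	fs := v1.PersistentVolumeFilesystem
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PersistentVolumeSpec{
			Capacity:                      v1.ResourceList{v1.ResourceStorage: resource.MustParse("2Gi")},
			AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
			StorageClassName:              "local-storage",
			VolumeMode:                    &fs,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				Local: &v1.LocalVolumeSource{Path: path},
			},
			// A local PV must declare which node the path lives on.
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}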
Jun 18 00:13:12.619: INFO: PersistentVolumeClaim pvc-zm4q8 found and phase=Bound (16.060405593s) Jun 18 00:13:12.619: INFO: Waiting up to 3m0s for PersistentVolume local-pvtbgkd to have phase Bound Jun 18 00:13:12.621: INFO: PersistentVolume local-pvtbgkd found and phase=Bound (1.889018ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 18 00:13:16.644: INFO: pod "pod-9738f2ad-f8db-485e-92c5-ab5d346a7a53" created on Node "node2" STEP: Writing in pod1 Jun 18 00:13:16.644: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-442 PodName:pod-9738f2ad-f8db-485e-92c5-ab5d346a7a53 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:13:16.644: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:16.727: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:13:16.727: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-442 PodName:pod-9738f2ad-f8db-485e-92c5-ab5d346a7a53 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:13:16.727: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:16.855: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 18 00:13:20.882: INFO: pod "pod-113c40e8-c262-44e3-900a-c00009494d53" created on Node "node2" Jun 18 00:13:20.882: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-442 PodName:pod-113c40e8-c262-44e3-900a-c00009494d53 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:13:20.882: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:20.966: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 18 00:13:20.966: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c09da36b-dae3-4cfb-9606-94ae61a8a01d > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-442 PodName:pod-113c40e8-c262-44e3-900a-c00009494d53 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:13:20.966: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:21.090: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-c09da36b-dae3-4cfb-9606-94ae61a8a01d > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 18 00:13:21.090: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-442 PodName:pod-9738f2ad-f8db-485e-92c5-ab5d346a7a53 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:13:21.090: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:21.179: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-c09da36b-dae3-4cfb-9606-94ae61a8a01d", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod 
pod-9738f2ad-f8db-485e-92c5-ab5d346a7a53 in namespace persistent-local-volumes-test-442 STEP: Deleting pod2 STEP: Deleting pod pod-113c40e8-c262-44e3-900a-c00009494d53 in namespace persistent-local-volumes-test-442 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:13:21.187: INFO: Deleting PersistentVolumeClaim "pvc-zm4q8" Jun 18 00:13:21.190: INFO: Deleting PersistentVolume "local-pvtbgkd" STEP: Removing the test directory Jun 18 00:13:21.193: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c09da36b-dae3-4cfb-9606-94ae61a8a01d] Namespace:persistent-local-volumes-test-442 PodName:hostexec-node2-gz964 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:21.193: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:21.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-442" for this suite. • [SLOW TEST:35.040 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":17,"skipped":586,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:27.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 STEP: Building a driver namespace object, basename csi-mock-volumes-7450 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:12:27.551: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-attacher Jun 18 00:12:27.554: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7450 Jun 18 00:12:27.554: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7450 Jun 18 00:12:27.557: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7450 Jun 18 00:12:27.559: INFO: creating *v1.Role: csi-mock-volumes-7450-4823/external-attacher-cfg-csi-mock-volumes-7450 Jun 18 00:12:27.563: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-7450-4823/csi-attacher-role-cfg Jun 18 00:12:27.565: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-provisioner Jun 18 00:12:27.568: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7450 Jun 18 00:12:27.568: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7450 Jun 18 00:12:27.571: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7450 Jun 18 00:12:27.574: INFO: creating *v1.Role: csi-mock-volumes-7450-4823/external-provisioner-cfg-csi-mock-volumes-7450 Jun 18 00:12:27.576: INFO: creating *v1.RoleBinding: csi-mock-volumes-7450-4823/csi-provisioner-role-cfg Jun 18 00:12:27.579: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-resizer Jun 18 00:12:27.582: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7450 Jun 18 00:12:27.582: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7450 Jun 18 00:12:27.585: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7450 Jun 18 00:12:27.588: INFO: creating *v1.Role: csi-mock-volumes-7450-4823/external-resizer-cfg-csi-mock-volumes-7450 Jun 18 00:12:27.591: INFO: creating *v1.RoleBinding: csi-mock-volumes-7450-4823/csi-resizer-role-cfg Jun 18 00:12:27.594: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-snapshotter Jun 18 00:12:27.596: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7450 Jun 18 00:12:27.596: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7450 Jun 18 00:12:27.599: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7450 Jun 18 00:12:27.601: INFO: creating *v1.Role: csi-mock-volumes-7450-4823/external-snapshotter-leaderelection-csi-mock-volumes-7450 Jun 18 00:12:27.604: INFO: creating *v1.RoleBinding: csi-mock-volumes-7450-4823/external-snapshotter-leaderelection Jun 18 00:12:27.607: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-mock Jun 18 00:12:27.609: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7450 Jun 18 00:12:27.612: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7450 Jun 18 00:12:27.614: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7450 Jun 18 00:12:27.617: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7450 Jun 18 00:12:27.619: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7450 Jun 18 00:12:27.622: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7450 Jun 18 00:12:27.624: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7450 Jun 18 00:12:27.627: INFO: creating *v1.StatefulSet: csi-mock-volumes-7450-4823/csi-mockplugin Jun 18 00:12:27.633: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7450 Jun 18 00:12:27.646: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7450" Jun 18 00:12:27.649: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7450 to register on node node2 STEP: Creating pod Jun 18 00:12:43.922: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:12:43.928: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fl92v] to have phase Bound Jun 18 00:12:43.931: INFO: PersistentVolumeClaim pvc-fl92v found but phase is Pending instead of Bound. 
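After deploying the mock driver objects, the test waits for "CSIDriver csi-mock-csi-mock-volumes-7450 to register on node node2". One way to express that wait — a sketch, not the framework's implementation — is to poll the node's CSINode object until the driver name appears in its registered driver list:

package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForCSIDriverOnNode polls the node's CSINode object until the named CSI driver
// shows up in its driver list, i.e. the node plugin has registered with kubelet.
func waitForCSIDriverOnNode(ctx context.Context, cs kubernetes.Interface, nodeName, driverName string) error {
	return wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		csiNode, err := cs.StorageV1().CSINodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // CSINode may not exist yet; keep polling
		}
		for _, d := range csiNode.Spec.Drivers {
			if d.Name == driverName {
				return true, nil
			}
		}
		return false, nil
	})
}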
Jun 18 00:12:45.934: INFO: PersistentVolumeClaim pvc-fl92v found and phase=Bound (2.005981577s) Jun 18 00:12:55.963: INFO: Deleting pod "pvc-volume-tester-hrnht" in namespace "csi-mock-volumes-7450" Jun 18 00:12:55.968: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hrnht" to be fully deleted STEP: Checking PVC events Jun 18 00:13:04.991: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-fl92v", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7450", SelfLink:"", UID:"504284e7-e123-47dc-bac0-f52e94fa6e5a", ResourceVersion:"101862", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107963, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036ceff0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0036cf008)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0037f80b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0037f80c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:13:04.991: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-fl92v", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7450", SelfLink:"", UID:"504284e7-e123-47dc-bac0-f52e94fa6e5a", ResourceVersion:"101863", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107963, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7450"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036cf038), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0036cf050)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036cf068), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0036cf080)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0037f80f0), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc0037f8100), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:13:04.991: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-fl92v", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7450", SelfLink:"", UID:"504284e7-e123-47dc-bac0-f52e94fa6e5a", ResourceVersion:"101870", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107963, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7450"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036cfec0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0036cfed8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036cfef0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0036cff08)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-504284e7-e123-47dc-bac0-f52e94fa6e5a", StorageClassName:(*string)(0xc0037f8a10), VolumeMode:(*v1.PersistentVolumeMode)(0xc0037f8a20), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:13:04.991: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-fl92v", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7450", SelfLink:"", UID:"504284e7-e123-47dc-bac0-f52e94fa6e5a", ResourceVersion:"101871", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107963, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7450"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036cff38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0036cff50)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0036cff68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0036cff80)}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-504284e7-e123-47dc-bac0-f52e94fa6e5a", StorageClassName:(*string)(0xc0037f8a50), VolumeMode:(*v1.PersistentVolumeMode)(0xc0037f8a60), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:13:04.991: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-fl92v", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7450", SelfLink:"", UID:"504284e7-e123-47dc-bac0-f52e94fa6e5a", ResourceVersion:"102375", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107963, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(0xc002de9b48), DeletionGracePeriodSeconds:(*int64)(0xc002acdd48), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7450"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002de9b60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002de9b78)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002de9b90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002de9ba8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-504284e7-e123-47dc-bac0-f52e94fa6e5a", StorageClassName:(*string)(0xc0009198d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0009198e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Jun 18 00:13:04.991: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-fl92v", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7450", SelfLink:"", UID:"504284e7-e123-47dc-bac0-f52e94fa6e5a", ResourceVersion:"102376", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63791107963, loc:(*time.Location)(0x9e2e180)}}, 
DeletionTimestamp:(*v1.Time)(0xc002de9bd8), DeletionGracePeriodSeconds:(*int64)(0xc002acddf8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7450"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002de9bf0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002de9c08)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002de9c20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002de9c38)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-504284e7-e123-47dc-bac0-f52e94fa6e5a", StorageClassName:(*string)(0xc000919930), VolumeMode:(*v1.PersistentVolumeMode)(0xc000919940), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-hrnht Jun 18 00:13:04.991: INFO: Deleting pod "pvc-volume-tester-hrnht" in namespace "csi-mock-volumes-7450" STEP: Deleting claim pvc-fl92v STEP: Deleting storageclass csi-mock-volumes-7450-sc64t9n STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7450 STEP: Waiting for namespaces [csi-mock-volumes-7450] to vanish STEP: uninstalling csi mock driver Jun 18 00:13:11.011: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-attacher Jun 18 00:13:11.015: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7450 Jun 18 00:13:11.019: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7450 Jun 18 00:13:11.022: INFO: deleting *v1.Role: csi-mock-volumes-7450-4823/external-attacher-cfg-csi-mock-volumes-7450 Jun 18 00:13:11.025: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7450-4823/csi-attacher-role-cfg Jun 18 00:13:11.029: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-provisioner Jun 18 00:13:11.033: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7450 Jun 18 00:13:11.037: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7450 Jun 18 00:13:11.040: INFO: deleting *v1.Role: csi-mock-volumes-7450-4823/external-provisioner-cfg-csi-mock-volumes-7450 Jun 18 00:13:11.043: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7450-4823/csi-provisioner-role-cfg Jun 18 00:13:11.046: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-resizer Jun 18 00:13:11.050: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7450 Jun 18 00:13:11.053: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7450 Jun 18 00:13:11.056: INFO: deleting 
*v1.Role: csi-mock-volumes-7450-4823/external-resizer-cfg-csi-mock-volumes-7450 Jun 18 00:13:11.059: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7450-4823/csi-resizer-role-cfg Jun 18 00:13:11.062: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-snapshotter Jun 18 00:13:11.065: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7450 Jun 18 00:13:11.068: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7450 Jun 18 00:13:11.071: INFO: deleting *v1.Role: csi-mock-volumes-7450-4823/external-snapshotter-leaderelection-csi-mock-volumes-7450 Jun 18 00:13:11.074: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7450-4823/external-snapshotter-leaderelection Jun 18 00:13:11.077: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7450-4823/csi-mock Jun 18 00:13:11.080: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7450 Jun 18 00:13:11.083: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7450 Jun 18 00:13:11.086: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7450 Jun 18 00:13:11.089: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7450 Jun 18 00:13:11.092: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7450 Jun 18 00:13:11.096: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7450 Jun 18 00:13:11.099: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7450 Jun 18 00:13:11.102: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7450-4823/csi-mockplugin Jun 18 00:13:11.106: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7450 STEP: deleting the driver namespace: csi-mock-volumes-7450-4823 STEP: Waiting for namespaces [csi-mock-volumes-7450-4823] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:23.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:55.643 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022 unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":14,"skipped":475,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} S ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:23.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for 
further testing Jun 18 00:13:23.166: INFO: The status of Pod test-hostpath-type-kvmkl is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:13:25.170: INFO: The status of Pod test-hostpath-type-kvmkl is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:13:27.169: INFO: The status of Pod test-hostpath-type-kvmkl is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Jun 18 00:13:27.172: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-705 PodName:test-hostpath-type-kvmkl ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:13:27.172: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350 [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:31.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-705" for this suite. • [SLOW TEST:8.159 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset","total":-1,"completed":15,"skipped":476,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:03.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Jun 18 00:13:05.791: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2642995c-ed06-42ea-9e7d-b0c693125c9c] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:05.791: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:05.885: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-70450eea-18eb-4012-a6dc-6da6bf6d9f26] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:05.885: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:05.968: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f5764719-2389-478c-9b2f-e797ff906d07] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:05.968: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:06.052: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-10ce3de5-6334-425c-aa59-47943ff4d80e] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:06.053: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:06.139: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ae52f067-e715-4d0a-bab8-f88abf3966cf] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:06.139: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:06.224: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2e0fc363-d3e8-4cdd-9cac-f0848b383e72] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:06.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:13:06.343: INFO: Creating a PV followed by a PVC Jun 18 00:13:06.351: INFO: Creating a PV followed by a PVC Jun 18 00:13:06.357: INFO: Creating a PV followed by a PVC Jun 18 00:13:06.362: INFO: Creating a PV followed by a PVC Jun 18 00:13:06.368: INFO: Creating a PV followed by a PVC Jun 18 00:13:06.374: INFO: Creating a PV followed by a PVC Jun 18 00:13:16.419: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Jun 18 00:13:20.436: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c24a5777-c708-42ec-b96f-b90b30967d4d] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:20.437: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:20.524: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-097ec6df-cf5f-41d5-8e56-78ec4df8fdf2] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:20.524: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:20.614: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-802b3777-9dfe-4554-a4e7-93dc10893574] 
Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:20.614: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:20.698: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5fad08b0-53c2-416a-85ee-c836a9ec3273] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:20.698: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:20.784: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ef644d8c-d71d-434f-acd7-05f6244884d1] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:20.784: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:20.867: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2f803784-0cd5-4b6c-a2e3-0c91345e846e] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:20.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:13:20.958: INFO: Creating a PV followed by a PVC Jun 18 00:13:20.963: INFO: Creating a PV followed by a PVC Jun 18 00:13:20.970: INFO: Creating a PV followed by a PVC Jun 18 00:13:20.976: INFO: Creating a PV followed by a PVC Jun 18 00:13:20.982: INFO: Creating a PV followed by a PVC Jun 18 00:13:20.988: INFO: Creating a PV followed by a PVC Jun 18 00:13:31.031: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Jun 18 00:13:31.031: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Jun 18 00:13:31.033: INFO: Deleting PersistentVolumeClaim "pvc-sfww4" Jun 18 00:13:31.036: INFO: Deleting PersistentVolume "local-pv96f4z" STEP: Cleaning up PVC and PV Jun 18 00:13:31.039: INFO: Deleting PersistentVolumeClaim "pvc-bfvhp" Jun 18 00:13:31.043: INFO: Deleting PersistentVolume "local-pvs9q4w" STEP: Cleaning up PVC and PV Jun 18 00:13:31.047: INFO: Deleting PersistentVolumeClaim "pvc-vfj6x" Jun 18 00:13:31.050: INFO: Deleting PersistentVolume "local-pv6rdfp" STEP: Cleaning up PVC and PV Jun 18 00:13:31.054: INFO: Deleting PersistentVolumeClaim "pvc-mngwf" Jun 18 00:13:31.058: INFO: Deleting PersistentVolume "local-pvtkh2t" STEP: Cleaning up PVC and PV Jun 18 00:13:31.062: INFO: Deleting PersistentVolumeClaim "pvc-stkpm" Jun 18 00:13:31.065: INFO: Deleting PersistentVolume "local-pv5nztw" STEP: Cleaning up PVC and PV Jun 18 00:13:31.069: INFO: Deleting PersistentVolumeClaim "pvc-6gqjn" Jun 18 00:13:31.072: INFO: Deleting PersistentVolume "local-pv7pdd6" STEP: Removing the test directory Jun 18 00:13:31.076: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-2642995c-ed06-42ea-9e7d-b0c693125c9c] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:31.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:31.178: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-70450eea-18eb-4012-a6dc-6da6bf6d9f26] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:31.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:31.268: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f5764719-2389-478c-9b2f-e797ff906d07] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:31.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:31.468: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-10ce3de5-6334-425c-aa59-47943ff4d80e] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:31.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:31.599: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ae52f067-e715-4d0a-bab8-f88abf3966cf] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:31.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:32.091: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2e0fc363-d3e8-4cdd-9cac-f0848b383e72] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node1-ns9xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:32.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Jun 18 00:13:32.194: INFO: Deleting PersistentVolumeClaim "pvc-st44p" Jun 18 00:13:32.199: INFO: Deleting PersistentVolume "local-pvkp46r" STEP: Cleaning up PVC and PV Jun 18 00:13:32.203: INFO: Deleting PersistentVolumeClaim "pvc-scxqq" Jun 18 00:13:32.207: INFO: Deleting PersistentVolume "local-pvhmms9" STEP: Cleaning up PVC and PV Jun 18 00:13:32.210: INFO: Deleting PersistentVolumeClaim "pvc-vlxjx" Jun 18 00:13:32.214: INFO: Deleting PersistentVolume "local-pvdscl2" STEP: Cleaning up PVC and PV Jun 18 00:13:32.218: INFO: Deleting PersistentVolumeClaim "pvc-n57qm" Jun 18 00:13:32.222: INFO: Deleting PersistentVolume "local-pvqh6l9" STEP: Cleaning up PVC and PV Jun 18 00:13:32.225: INFO: Deleting PersistentVolumeClaim "pvc-cf54b" Jun 18 00:13:32.229: INFO: Deleting PersistentVolume "local-pvcb8hn" STEP: Cleaning up PVC and PV Jun 18 00:13:32.233: INFO: Deleting PersistentVolumeClaim "pvc-xlthq" Jun 18 00:13:32.237: INFO: Deleting 
PersistentVolume "local-pvgzd9m" STEP: Removing the test directory Jun 18 00:13:32.241: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c24a5777-c708-42ec-b96f-b90b30967d4d] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:32.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:32.326: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-097ec6df-cf5f-41d5-8e56-78ec4df8fdf2] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:32.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:32.435: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-802b3777-9dfe-4554-a4e7-93dc10893574] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:32.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:32.515: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5fad08b0-53c2-416a-85ee-c836a9ec3273] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:32.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:32.604: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ef644d8c-d71d-434f-acd7-05f6244884d1] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:32.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:13:32.689: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2f803784-0cd5-4b6c-a2e3-0c91345e846e] Namespace:persistent-local-volumes-test-9703 PodName:hostexec-node2-6kvnw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:32.689: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:32.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9703" for this suite. 
S [SKIPPING] [29.052 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:412 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:21.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:13:23.342: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-f97dd195-29f5-40b4-bc82-ed5bfc406e25-backend && mount --bind /tmp/local-volume-test-f97dd195-29f5-40b4-bc82-ed5bfc406e25-backend /tmp/local-volume-test-f97dd195-29f5-40b4-bc82-ed5bfc406e25-backend && ln -s /tmp/local-volume-test-f97dd195-29f5-40b4-bc82-ed5bfc406e25-backend /tmp/local-volume-test-f97dd195-29f5-40b4-bc82-ed5bfc406e25] Namespace:persistent-local-volumes-test-234 PodName:hostexec-node1-wsbzz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:23.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:13:23.452: INFO: Creating a PV followed by a PVC Jun 18 00:13:23.458: INFO: Waiting for PV local-pvf826j to bind to PVC pvc-d7wdl Jun 18 00:13:23.458: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-d7wdl] to have phase Bound Jun 18 00:13:23.460: INFO: PersistentVolumeClaim pvc-d7wdl found but phase is Pending instead of Bound. Jun 18 00:13:25.465: INFO: PersistentVolumeClaim pvc-d7wdl found but phase is Pending instead of Bound. 
Jun 18 00:13:27.469: INFO: PersistentVolumeClaim pvc-d7wdl found and phase=Bound (4.010222748s) Jun 18 00:13:27.469: INFO: Waiting up to 3m0s for PersistentVolume local-pvf826j to have phase Bound Jun 18 00:13:27.471: INFO: PersistentVolume local-pvf826j found and phase=Bound (2.128307ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 18 00:13:33.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-234 exec pod-91b152cc-a8b0-4919-aadf-0777b4bd9203 --namespace=persistent-local-volumes-test-234 -- stat -c %g /mnt/volume1' Jun 18 00:13:33.735: INFO: stderr: "" Jun 18 00:13:33.735: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-91b152cc-a8b0-4919-aadf-0777b4bd9203 in namespace persistent-local-volumes-test-234 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:13:33.741: INFO: Deleting PersistentVolumeClaim "pvc-d7wdl" Jun 18 00:13:33.745: INFO: Deleting PersistentVolume "local-pvf826j" STEP: Removing the test directory Jun 18 00:13:33.750: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-f97dd195-29f5-40b4-bc82-ed5bfc406e25 && umount /tmp/local-volume-test-f97dd195-29f5-40b4-bc82-ed5bfc406e25-backend && rm -r /tmp/local-volume-test-f97dd195-29f5-40b4-bc82-ed5bfc406e25-backend] Namespace:persistent-local-volumes-test-234 PodName:hostexec-node1-wsbzz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:33.750: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:33.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-234" for this suite. 
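The dir-link-bindmounted volume type exercised above is a host directory that is bind-mounted onto itself and exposed through a symlink, and the fsGroup check is a single stat from inside the consuming pod. A minimal sketch using the paths and names logged for this run (the host-side commands run on node1, e.g. via the hostexec pod and nsenter as shown in the log):

    BASE=/tmp/local-volume-test-f97dd195-29f5-40b4-bc82-ed5bfc406e25
    # host side: backend dir, self bind-mount, symlink that the local PV points at
    mkdir "${BASE}-backend" \
      && mount --bind "${BASE}-backend" "${BASE}-backend" \
      && ln -s "${BASE}-backend" "${BASE}"
    # fsGroup verification from the pod; expects the pod's fsGroup (1234 in this run)
    kubectl --namespace=persistent-local-volumes-test-234 \
      exec pod-91b152cc-a8b0-4919-aadf-0777b4bd9203 -- stat -c %g /mnt/volume1
    # teardown, mirroring the AfterEach
    rm "${BASE}" && umount "${BASE}-backend" && rm -r "${BASE}-backend"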
• [SLOW TEST:12.570 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":18,"skipped":588,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:26.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-4040 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:12:26.174: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-attacher Jun 18 00:12:26.177: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4040 Jun 18 00:12:26.177: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4040 Jun 18 00:12:26.179: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4040 Jun 18 00:12:26.182: INFO: creating *v1.Role: csi-mock-volumes-4040-7132/external-attacher-cfg-csi-mock-volumes-4040 Jun 18 00:12:26.185: INFO: creating *v1.RoleBinding: csi-mock-volumes-4040-7132/csi-attacher-role-cfg Jun 18 00:12:26.187: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-provisioner Jun 18 00:12:26.190: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4040 Jun 18 00:12:26.190: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4040 Jun 18 00:12:26.193: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4040 Jun 18 00:12:26.196: INFO: creating *v1.Role: csi-mock-volumes-4040-7132/external-provisioner-cfg-csi-mock-volumes-4040 Jun 18 00:12:26.201: INFO: creating *v1.RoleBinding: csi-mock-volumes-4040-7132/csi-provisioner-role-cfg Jun 18 00:12:26.205: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-resizer Jun 18 00:12:26.208: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4040 Jun 18 00:12:26.208: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4040 Jun 18 00:12:26.211: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4040 Jun 18 00:12:26.214: INFO: creating *v1.Role: csi-mock-volumes-4040-7132/external-resizer-cfg-csi-mock-volumes-4040 Jun 18 00:12:26.216: INFO: creating *v1.RoleBinding: csi-mock-volumes-4040-7132/csi-resizer-role-cfg Jun 18 00:12:26.219: INFO: 
creating *v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-snapshotter Jun 18 00:12:26.222: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4040 Jun 18 00:12:26.222: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4040 Jun 18 00:12:26.225: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4040 Jun 18 00:12:26.227: INFO: creating *v1.Role: csi-mock-volumes-4040-7132/external-snapshotter-leaderelection-csi-mock-volumes-4040 Jun 18 00:12:26.230: INFO: creating *v1.RoleBinding: csi-mock-volumes-4040-7132/external-snapshotter-leaderelection Jun 18 00:12:26.233: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-mock Jun 18 00:12:26.236: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4040 Jun 18 00:12:26.238: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4040 Jun 18 00:12:26.240: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4040 Jun 18 00:12:26.243: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4040 Jun 18 00:12:26.246: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4040 Jun 18 00:12:26.249: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4040 Jun 18 00:12:26.252: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4040 Jun 18 00:12:26.255: INFO: creating *v1.StatefulSet: csi-mock-volumes-4040-7132/csi-mockplugin Jun 18 00:12:26.259: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4040 Jun 18 00:12:26.262: INFO: creating *v1.StatefulSet: csi-mock-volumes-4040-7132/csi-mockplugin-attacher Jun 18 00:12:26.265: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4040" Jun 18 00:12:26.267: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4040 to register on node node2 STEP: Creating pod Jun 18 00:12:47.539: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 18 00:13:03.568: INFO: Deleting pod "pvc-volume-tester-v5fv6" in namespace "csi-mock-volumes-4040" Jun 18 00:13:03.574: INFO: Wait up to 5m0s for pod "pvc-volume-tester-v5fv6" to be fully deleted STEP: Deleting pod pvc-volume-tester-v5fv6 Jun 18 00:13:09.580: INFO: Deleting pod "pvc-volume-tester-v5fv6" in namespace "csi-mock-volumes-4040" STEP: Deleting claim pvc-rnzjf Jun 18 00:13:09.590: INFO: Waiting up to 2m0s for PersistentVolume pvc-6be776ff-bcea-4de0-a66d-9456ef480c9e to get deleted Jun 18 00:13:09.592: INFO: PersistentVolume pvc-6be776ff-bcea-4de0-a66d-9456ef480c9e found and phase=Bound (1.976795ms) Jun 18 00:13:11.597: INFO: PersistentVolume pvc-6be776ff-bcea-4de0-a66d-9456ef480c9e found and phase=Released (2.007260731s) Jun 18 00:13:13.600: INFO: PersistentVolume pvc-6be776ff-bcea-4de0-a66d-9456ef480c9e found and phase=Released (4.010682276s) Jun 18 00:13:15.606: INFO: PersistentVolume pvc-6be776ff-bcea-4de0-a66d-9456ef480c9e found and phase=Released (6.01652658s) Jun 18 00:13:17.610: INFO: PersistentVolume pvc-6be776ff-bcea-4de0-a66d-9456ef480c9e was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-4040 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4040 STEP: Waiting for namespaces [csi-mock-volumes-4040] to vanish STEP: uninstalling csi mock driver Jun 18 00:13:23.629: INFO: deleting 
*v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-attacher Jun 18 00:13:23.633: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4040 Jun 18 00:13:23.637: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4040 Jun 18 00:13:23.640: INFO: deleting *v1.Role: csi-mock-volumes-4040-7132/external-attacher-cfg-csi-mock-volumes-4040 Jun 18 00:13:23.643: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4040-7132/csi-attacher-role-cfg Jun 18 00:13:23.647: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-provisioner Jun 18 00:13:23.650: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4040 Jun 18 00:13:23.654: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4040 Jun 18 00:13:23.657: INFO: deleting *v1.Role: csi-mock-volumes-4040-7132/external-provisioner-cfg-csi-mock-volumes-4040 Jun 18 00:13:23.660: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4040-7132/csi-provisioner-role-cfg Jun 18 00:13:23.663: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-resizer Jun 18 00:13:23.668: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4040 Jun 18 00:13:23.671: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4040 Jun 18 00:13:23.675: INFO: deleting *v1.Role: csi-mock-volumes-4040-7132/external-resizer-cfg-csi-mock-volumes-4040 Jun 18 00:13:23.678: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4040-7132/csi-resizer-role-cfg Jun 18 00:13:23.681: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-snapshotter Jun 18 00:13:23.685: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4040 Jun 18 00:13:23.689: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4040 Jun 18 00:13:23.693: INFO: deleting *v1.Role: csi-mock-volumes-4040-7132/external-snapshotter-leaderelection-csi-mock-volumes-4040 Jun 18 00:13:23.696: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4040-7132/external-snapshotter-leaderelection Jun 18 00:13:23.700: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4040-7132/csi-mock Jun 18 00:13:23.703: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4040 Jun 18 00:13:23.706: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4040 Jun 18 00:13:23.709: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4040 Jun 18 00:13:23.712: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4040 Jun 18 00:13:23.716: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4040 Jun 18 00:13:23.719: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4040 Jun 18 00:13:23.723: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4040 Jun 18 00:13:23.726: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4040-7132/csi-mockplugin Jun 18 00:13:23.730: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4040 Jun 18 00:13:23.734: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4040-7132/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4040-7132 STEP: Waiting for namespaces [csi-mock-volumes-4040-7132] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:35.748: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:69.638 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":20,"skipped":589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:33.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-6733 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:12:33.401: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-attacher Jun 18 00:12:33.404: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6733 Jun 18 00:12:33.404: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6733 Jun 18 00:12:33.407: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6733 Jun 18 00:12:33.410: INFO: creating *v1.Role: csi-mock-volumes-6733-5619/external-attacher-cfg-csi-mock-volumes-6733 Jun 18 00:12:33.413: INFO: creating *v1.RoleBinding: csi-mock-volumes-6733-5619/csi-attacher-role-cfg Jun 18 00:12:33.416: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-provisioner Jun 18 00:12:33.419: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6733 Jun 18 00:12:33.419: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6733 Jun 18 00:12:33.422: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6733 Jun 18 00:12:33.425: INFO: creating *v1.Role: csi-mock-volumes-6733-5619/external-provisioner-cfg-csi-mock-volumes-6733 Jun 18 00:12:33.428: INFO: creating *v1.RoleBinding: csi-mock-volumes-6733-5619/csi-provisioner-role-cfg Jun 18 00:12:33.431: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-resizer Jun 18 00:12:33.433: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6733 Jun 18 00:12:33.433: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6733 Jun 18 00:12:33.436: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6733 Jun 18 00:12:33.438: INFO: creating *v1.Role: csi-mock-volumes-6733-5619/external-resizer-cfg-csi-mock-volumes-6733 Jun 18 00:12:33.442: INFO: creating *v1.RoleBinding: csi-mock-volumes-6733-5619/csi-resizer-role-cfg Jun 18 00:12:33.444: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-snapshotter Jun 18 00:12:33.447: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6733 Jun 18 00:12:33.447: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-6733 Jun 18 00:12:33.449: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6733 Jun 18 00:12:33.452: INFO: creating *v1.Role: csi-mock-volumes-6733-5619/external-snapshotter-leaderelection-csi-mock-volumes-6733 Jun 18 00:12:33.454: INFO: creating *v1.RoleBinding: csi-mock-volumes-6733-5619/external-snapshotter-leaderelection Jun 18 00:12:33.456: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-mock Jun 18 00:12:33.459: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6733 Jun 18 00:12:33.462: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6733 Jun 18 00:12:33.464: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6733 Jun 18 00:12:33.467: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6733 Jun 18 00:12:33.470: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6733 Jun 18 00:12:33.472: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6733 Jun 18 00:12:33.475: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6733 Jun 18 00:12:33.478: INFO: creating *v1.StatefulSet: csi-mock-volumes-6733-5619/csi-mockplugin Jun 18 00:12:33.485: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6733 Jun 18 00:12:33.488: INFO: creating *v1.StatefulSet: csi-mock-volumes-6733-5619/csi-mockplugin-attacher Jun 18 00:12:33.492: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6733" Jun 18 00:12:33.494: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6733 to register on node node2 STEP: Creating pod Jun 18 00:12:48.017: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 18 00:13:06.039: INFO: Deleting pod "pvc-volume-tester-qdm8p" in namespace "csi-mock-volumes-6733" Jun 18 00:13:06.043: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qdm8p" to be fully deleted STEP: Deleting pod pvc-volume-tester-qdm8p Jun 18 00:13:10.050: INFO: Deleting pod "pvc-volume-tester-qdm8p" in namespace "csi-mock-volumes-6733" STEP: Deleting claim pvc-j2j9k Jun 18 00:13:10.057: INFO: Waiting up to 2m0s for PersistentVolume pvc-38229df3-fe87-4b3d-a1ab-65de2ca36a09 to get deleted Jun 18 00:13:10.059: INFO: PersistentVolume pvc-38229df3-fe87-4b3d-a1ab-65de2ca36a09 found and phase=Bound (1.722083ms) Jun 18 00:13:12.065: INFO: PersistentVolume pvc-38229df3-fe87-4b3d-a1ab-65de2ca36a09 found and phase=Released (2.007281549s) Jun 18 00:13:14.068: INFO: PersistentVolume pvc-38229df3-fe87-4b3d-a1ab-65de2ca36a09 found and phase=Released (4.010307103s) Jun 18 00:13:16.073: INFO: PersistentVolume pvc-38229df3-fe87-4b3d-a1ab-65de2ca36a09 found and phase=Released (6.016206838s) Jun 18 00:13:18.078: INFO: PersistentVolume pvc-38229df3-fe87-4b3d-a1ab-65de2ca36a09 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-6733 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6733 STEP: Waiting for namespaces [csi-mock-volumes-6733] to vanish STEP: uninstalling csi mock driver Jun 18 00:13:24.091: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-attacher Jun 18 00:13:24.095: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6733 Jun 18 00:13:24.099: INFO: deleting *v1.ClusterRoleBinding: 
csi-attacher-role-csi-mock-volumes-6733 Jun 18 00:13:24.102: INFO: deleting *v1.Role: csi-mock-volumes-6733-5619/external-attacher-cfg-csi-mock-volumes-6733 Jun 18 00:13:24.106: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6733-5619/csi-attacher-role-cfg Jun 18 00:13:24.110: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-provisioner Jun 18 00:13:24.113: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6733 Jun 18 00:13:24.120: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6733 Jun 18 00:13:24.127: INFO: deleting *v1.Role: csi-mock-volumes-6733-5619/external-provisioner-cfg-csi-mock-volumes-6733 Jun 18 00:13:24.137: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6733-5619/csi-provisioner-role-cfg Jun 18 00:13:24.143: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-resizer Jun 18 00:13:24.146: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6733 Jun 18 00:13:24.150: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6733 Jun 18 00:13:24.153: INFO: deleting *v1.Role: csi-mock-volumes-6733-5619/external-resizer-cfg-csi-mock-volumes-6733 Jun 18 00:13:24.156: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6733-5619/csi-resizer-role-cfg Jun 18 00:13:24.158: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-snapshotter Jun 18 00:13:24.161: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6733 Jun 18 00:13:24.165: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6733 Jun 18 00:13:24.168: INFO: deleting *v1.Role: csi-mock-volumes-6733-5619/external-snapshotter-leaderelection-csi-mock-volumes-6733 Jun 18 00:13:24.172: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6733-5619/external-snapshotter-leaderelection Jun 18 00:13:24.176: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6733-5619/csi-mock Jun 18 00:13:24.179: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6733 Jun 18 00:13:24.183: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6733 Jun 18 00:13:24.186: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6733 Jun 18 00:13:24.189: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6733 Jun 18 00:13:24.193: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6733 Jun 18 00:13:24.196: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6733 Jun 18 00:13:24.200: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6733 Jun 18 00:13:24.203: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6733-5619/csi-mockplugin Jun 18 00:13:24.206: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6733 Jun 18 00:13:24.209: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6733-5619/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-6733-5619 STEP: Waiting for namespaces [csi-mock-volumes-6733-5619] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:36.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:62.893 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 
CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":9,"skipped":360,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:22.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when csiServiceAccountTokenEnabled=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 STEP: Building a driver namespace object, basename csi-mock-volumes-4171 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:12:22.098: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4171-256/csi-attacher Jun 18 00:12:22.101: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4171 Jun 18 00:12:22.101: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4171 Jun 18 00:12:22.103: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4171 Jun 18 00:12:22.106: INFO: creating *v1.Role: csi-mock-volumes-4171-256/external-attacher-cfg-csi-mock-volumes-4171 Jun 18 00:12:22.109: INFO: creating *v1.RoleBinding: csi-mock-volumes-4171-256/csi-attacher-role-cfg Jun 18 00:12:22.111: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4171-256/csi-provisioner Jun 18 00:12:22.114: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4171 Jun 18 00:12:22.114: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4171 Jun 18 00:12:22.117: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4171 Jun 18 00:12:22.120: INFO: creating *v1.Role: csi-mock-volumes-4171-256/external-provisioner-cfg-csi-mock-volumes-4171 Jun 18 00:12:22.122: INFO: creating *v1.RoleBinding: csi-mock-volumes-4171-256/csi-provisioner-role-cfg Jun 18 00:12:22.125: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4171-256/csi-resizer Jun 18 00:12:22.127: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4171 Jun 18 00:12:22.127: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4171 Jun 18 00:12:22.129: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4171 Jun 18 00:12:22.132: INFO: creating *v1.Role: csi-mock-volumes-4171-256/external-resizer-cfg-csi-mock-volumes-4171 Jun 18 00:12:22.134: INFO: creating *v1.RoleBinding: csi-mock-volumes-4171-256/csi-resizer-role-cfg Jun 18 00:12:22.137: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4171-256/csi-snapshotter Jun 18 00:12:22.140: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4171 Jun 18 00:12:22.140: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4171 Jun 18 00:12:22.142: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4171 Jun 18 00:12:22.145: INFO: creating 
*v1.Role: csi-mock-volumes-4171-256/external-snapshotter-leaderelection-csi-mock-volumes-4171 Jun 18 00:12:22.148: INFO: creating *v1.RoleBinding: csi-mock-volumes-4171-256/external-snapshotter-leaderelection Jun 18 00:12:22.150: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4171-256/csi-mock Jun 18 00:12:22.152: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4171 Jun 18 00:12:22.155: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4171 Jun 18 00:12:22.157: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4171 Jun 18 00:12:22.159: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4171 Jun 18 00:12:22.162: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4171 Jun 18 00:12:22.164: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4171 Jun 18 00:12:22.167: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4171 Jun 18 00:12:22.170: INFO: creating *v1.StatefulSet: csi-mock-volumes-4171-256/csi-mockplugin Jun 18 00:12:22.174: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4171 Jun 18 00:12:22.177: INFO: creating *v1.StatefulSet: csi-mock-volumes-4171-256/csi-mockplugin-attacher Jun 18 00:12:22.181: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4171" Jun 18 00:12:22.183: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4171 to register on node node2 STEP: Creating pod Jun 18 00:12:31.700: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:12:31.705: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-s2fkg] to have phase Bound Jun 18 00:12:31.707: INFO: PersistentVolumeClaim pvc-s2fkg found but phase is Pending instead of Bound. 
Jun 18 00:12:33.712: INFO: PersistentVolumeClaim pvc-s2fkg found and phase=Bound (2.006923087s) STEP: Deleting the previously created pod Jun 18 00:12:53.731: INFO: Deleting pod "pvc-volume-tester-mqq5f" in namespace "csi-mock-volumes-4171" Jun 18 00:12:53.736: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mqq5f" to be fully deleted STEP: Checking CSI driver logs Jun 18 00:13:00.230: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/b5e26ef7-a19b-4ccd-ac71-91b0db3cee19/volumes/kubernetes.io~csi/pvc-2f6f4510-b794-4a10-8ca4-c3fe94803867/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-mqq5f Jun 18 00:13:00.230: INFO: Deleting pod "pvc-volume-tester-mqq5f" in namespace "csi-mock-volumes-4171" STEP: Deleting claim pvc-s2fkg Jun 18 00:13:00.240: INFO: Waiting up to 2m0s for PersistentVolume pvc-2f6f4510-b794-4a10-8ca4-c3fe94803867 to get deleted Jun 18 00:13:00.242: INFO: PersistentVolume pvc-2f6f4510-b794-4a10-8ca4-c3fe94803867 found and phase=Bound (2.407468ms) Jun 18 00:13:02.246: INFO: PersistentVolume pvc-2f6f4510-b794-4a10-8ca4-c3fe94803867 found and phase=Released (2.005530549s) Jun 18 00:13:04.250: INFO: PersistentVolume pvc-2f6f4510-b794-4a10-8ca4-c3fe94803867 found and phase=Released (4.009549807s) Jun 18 00:13:06.253: INFO: PersistentVolume pvc-2f6f4510-b794-4a10-8ca4-c3fe94803867 found and phase=Released (6.013283509s) Jun 18 00:13:08.258: INFO: PersistentVolume pvc-2f6f4510-b794-4a10-8ca4-c3fe94803867 was removed STEP: Deleting storageclass csi-mock-volumes-4171-scjpsdz STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4171 STEP: Waiting for namespaces [csi-mock-volumes-4171] to vanish STEP: uninstalling csi mock driver Jun 18 00:13:14.270: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4171-256/csi-attacher Jun 18 00:13:14.275: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4171 Jun 18 00:13:14.279: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4171 Jun 18 00:13:14.282: INFO: deleting *v1.Role: csi-mock-volumes-4171-256/external-attacher-cfg-csi-mock-volumes-4171 Jun 18 00:13:14.286: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4171-256/csi-attacher-role-cfg Jun 18 00:13:14.289: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4171-256/csi-provisioner Jun 18 00:13:14.293: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4171 Jun 18 00:13:14.297: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4171 Jun 18 00:13:14.300: INFO: deleting *v1.Role: csi-mock-volumes-4171-256/external-provisioner-cfg-csi-mock-volumes-4171 Jun 18 00:13:14.303: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4171-256/csi-provisioner-role-cfg Jun 18 00:13:14.307: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4171-256/csi-resizer Jun 18 00:13:14.310: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4171 Jun 18 00:13:14.313: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4171 Jun 18 00:13:14.317: INFO: deleting *v1.Role: csi-mock-volumes-4171-256/external-resizer-cfg-csi-mock-volumes-4171 Jun 18 00:13:14.323: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4171-256/csi-resizer-role-cfg Jun 18 00:13:14.330: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-4171-256/csi-snapshotter Jun 18 00:13:14.337: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4171 Jun 18 00:13:14.343: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4171 Jun 18 00:13:14.347: INFO: deleting *v1.Role: csi-mock-volumes-4171-256/external-snapshotter-leaderelection-csi-mock-volumes-4171 Jun 18 00:13:14.351: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4171-256/external-snapshotter-leaderelection Jun 18 00:13:14.354: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4171-256/csi-mock Jun 18 00:13:14.358: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4171 Jun 18 00:13:14.361: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4171 Jun 18 00:13:14.364: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4171 Jun 18 00:13:14.368: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4171 Jun 18 00:13:14.372: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4171 Jun 18 00:13:14.375: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4171 Jun 18 00:13:14.378: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4171 Jun 18 00:13:14.381: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4171-256/csi-mockplugin Jun 18 00:13:14.385: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4171 Jun 18 00:13:14.388: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4171-256/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4171-256 STEP: Waiting for namespaces [csi-mock-volumes-4171-256] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:42.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:80.369 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1496 token should not be plumbed down when csiServiceAccountTokenEnabled=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":7,"skipped":183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:36.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 18 00:13:36.307: INFO: The status of Pod test-hostpath-type-msvl9 is Pending, waiting for it to be Running 
(with Ready = true) Jun 18 00:13:38.310: INFO: The status of Pod test-hostpath-type-msvl9 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:13:40.310: INFO: The status of Pod test-hostpath-type-msvl9 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:13:42.310: INFO: The status of Pod test-hostpath-type-msvl9 is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Jun 18 00:13:42.312: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-9882 PodName:test-hostpath-type-msvl9 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:13:42.312: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:44.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-9882" for this suite. • [SLOW TEST:8.398 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev","total":-1,"completed":10,"skipped":380,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:35.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 18 00:13:35.853: INFO: The status of Pod test-hostpath-type-jdxcd is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:13:37.857: INFO: The status of Pod test-hostpath-type-jdxcd is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:13:39.857: INFO: The status of Pod test-hostpath-type-jdxcd is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:13:41.859: INFO: The status of Pod test-hostpath-type-jdxcd is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:49.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-1656" for this suite. • [SLOW TEST:14.104 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset","total":-1,"completed":21,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:42.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 STEP: Creating a pod to test hostPath r/w Jun 18 00:13:42.493: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8200" to be "Succeeded or Failed" Jun 18 00:13:42.496: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.073092ms Jun 18 00:13:44.500: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006574454s Jun 18 00:13:46.505: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012064877s Jun 18 00:13:48.511: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017405241s Jun 18 00:13:50.516: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02217653s Jun 18 00:13:52.520: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.026578495s STEP: Saw pod success Jun 18 00:13:52.520: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Jun 18 00:13:52.523: INFO: Trying to get logs from node node1 pod pod-host-path-test container test-container-2: STEP: delete the pod Jun 18 00:13:52.534: INFO: Waiting for pod pod-host-path-test to disappear Jun 18 00:13:52.536: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:52.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8200" for this suite. 
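The HostPathType suites above drive their setup through plain shell in the test pod: the block- and character-device cases create device nodes with mknod and then rely on the kubelet's HostPathType validation when a consuming pod mounts (or fails to mount) them. The mknod calls below are the ones logged in this run; the test -b / test -c probes are only an illustration of the property being validated, not the kubelet's own implementation:

    mknod /mnt/test/ablkdev b 89 1    # block device used by the Block Device suite
    mknod /mnt/test/achardev c 89 1   # character device used by the Character Device suite
    test -b /mnt/test/ablkdev  && echo "ablkdev is a block device"
    test -c /mnt/test/achardev && echo "achardev is a character device"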
• [SLOW TEST:10.084 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":8,"skipped":206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:50.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 18 00:13:54.096: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9129 PodName:hostexec-node2-6pd29 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:54.096: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:13:54.183: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 18 00:13:54.183: INFO: exec node2: stdout: "0\n" Jun 18 00:13:54.184: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 18 00:13:54.184: INFO: exec node2: exit code: 0 Jun 18 00:13:54.184: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:13:54.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9129" for this suite. 
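The skip decision recorded below comes from the hostexec probe shown above: the spec only runs when at least one SCSI local SSD is mounted by UUID on the node. A node-side equivalent of that probe, sketched under the assumption that it is run directly on the host rather than through the hostexec pod:

# mirror of the test's discovery command; bail out when no SCSI-fs local SSD is present
count=$(ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ 2>/dev/null | wc -l)
if [ "$count" -lt 1 ]; then
  echo "Requires at least 1 scsi fs localSSD" >&2
  exit 1
fi
echo "found $count scsi fs localSSD volume(s)"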
S [SKIPPING] in Spec Setup (BeforeEach) [4.141 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:44.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:13:52.762: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-49746ab8-ca59-42d9-a9fc-cbc5bc0e14ce && mount --bind /tmp/local-volume-test-49746ab8-ca59-42d9-a9fc-cbc5bc0e14ce /tmp/local-volume-test-49746ab8-ca59-42d9-a9fc-cbc5bc0e14ce] Namespace:persistent-local-volumes-test-7519 PodName:hostexec-node1-22vkw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:13:52.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:13:52.954: INFO: Creating a PV followed by a PVC Jun 18 00:13:52.960: INFO: Waiting for PV local-pvvfthz to bind to PVC pvc-42kwd Jun 18 00:13:52.960: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-42kwd] to have phase Bound Jun 18 00:13:52.962: INFO: PersistentVolumeClaim pvc-42kwd found but phase is Pending instead of Bound. Jun 18 00:13:54.967: INFO: PersistentVolumeClaim pvc-42kwd found but phase is Pending instead of Bound. 
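The [Volume type: dir-bindmounted] setup above prepares a bind-mounted host directory and wraps it in a PersistentVolume/PersistentVolumeClaim pair, whose binding completes below. A rough manifest-form equivalent, reusing the bind-mount path from the log but with hypothetical object names, size and storage class (the test generates its own names such as local-pvvfthz and pvc-42kwd):

# host-side prep on node1, as in the log output above
DIR=/tmp/local-volume-test-49746ab8-ca59-42d9-a9fc-cbc5bc0e14ce
mkdir -p "$DIR" && mount --bind "$DIR" "$DIR"

# hypothetical static local PV + PVC pair; names, size and class are assumptions
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: $DIR
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
EOF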
Jun 18 00:13:56.975: INFO: PersistentVolumeClaim pvc-42kwd found and phase=Bound (4.015390152s) Jun 18 00:13:56.975: INFO: Waiting up to 3m0s for PersistentVolume local-pvvfthz to have phase Bound Jun 18 00:13:56.979: INFO: PersistentVolume local-pvvfthz found and phase=Bound (3.082415ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:14:01.011: INFO: pod "pod-9d022ca7-6e1f-4afc-8a07-ce61d27d9f6e" created on Node "node1" STEP: Writing in pod1 Jun 18 00:14:01.011: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7519 PodName:pod-9d022ca7-6e1f-4afc-8a07-ce61d27d9f6e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:01.011: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:01.189: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Jun 18 00:14:01.189: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7519 PodName:pod-9d022ca7-6e1f-4afc-8a07-ce61d27d9f6e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:01.189: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:01.265: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Jun 18 00:14:01.265: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-49746ab8-ca59-42d9-a9fc-cbc5bc0e14ce > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7519 PodName:pod-9d022ca7-6e1f-4afc-8a07-ce61d27d9f6e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:01.265: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:01.342: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-49746ab8-ca59-42d9-a9fc-cbc5bc0e14ce > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-9d022ca7-6e1f-4afc-8a07-ce61d27d9f6e in namespace persistent-local-volumes-test-7519 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:14:01.347: INFO: Deleting PersistentVolumeClaim "pvc-42kwd" Jun 18 00:14:01.350: INFO: Deleting PersistentVolume "local-pvvfthz" STEP: Removing the test directory Jun 18 00:14:01.354: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-49746ab8-ca59-42d9-a9fc-cbc5bc0e14ce && rm -r /tmp/local-volume-test-49746ab8-ca59-42d9-a9fc-cbc5bc0e14ce] Namespace:persistent-local-volumes-test-7519 PodName:hostexec-node1-22vkw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Jun 18 00:14:01.354: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:01.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7519" for this suite. • [SLOW TEST:16.735 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":11,"skipped":400,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:01.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should not provision a volume in an unmanaged GCE zone. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Jun 18 00:14:01.512: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:01.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-273" for this suite. S [SKIPPING] [0.033 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should not provision a volume in an unmanaged GCE zone. 
[It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:452 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:54.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 18 00:13:54.297: INFO: The status of Pod test-hostpath-type-wzhmz is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:13:56.301: INFO: The status of Pod test-hostpath-type-wzhmz is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:13:58.304: INFO: The status of Pod test-hostpath-type-wzhmz is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:04.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-4270" for this suite. 
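Both HostPathType File specs above hinge on the hostPath type field: FileOrCreate and File accept a regular file, while BlockDevice requires an actual block device and therefore fails against 'afile'. A minimal sketch of the passing case, with pod name, host path and image assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-file-demo           # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls -l /mnt/afile"]
    volumeMounts:
    - name: afile
      mountPath: /mnt/afile
  volumes:
  - name: afile
    hostPath:
      path: /tmp/afile               # assumed host file; FileOrCreate creates it empty if absent
      type: FileOrCreate
EOF
# changing type: FileOrCreate to type: BlockDevice against the same regular file
# reproduces the HostPathType error event checked by the failing-case spec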
• [SLOW TEST:10.104 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev","total":-1,"completed":22,"skipped":705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:04.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Jun 18 00:14:04.485: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:04.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7311" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:05.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeStage after NodeUnstage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 STEP: Building a driver namespace object, basename csi-mock-volumes-1152 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:13:05.812: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-attacher Jun 18 00:13:05.815: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1152 Jun 18 00:13:05.815: INFO: Define cluster role 
external-attacher-runner-csi-mock-volumes-1152 Jun 18 00:13:05.818: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1152 Jun 18 00:13:05.821: INFO: creating *v1.Role: csi-mock-volumes-1152-7788/external-attacher-cfg-csi-mock-volumes-1152 Jun 18 00:13:05.823: INFO: creating *v1.RoleBinding: csi-mock-volumes-1152-7788/csi-attacher-role-cfg Jun 18 00:13:05.825: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-provisioner Jun 18 00:13:05.828: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1152 Jun 18 00:13:05.828: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1152 Jun 18 00:13:05.831: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1152 Jun 18 00:13:05.833: INFO: creating *v1.Role: csi-mock-volumes-1152-7788/external-provisioner-cfg-csi-mock-volumes-1152 Jun 18 00:13:05.836: INFO: creating *v1.RoleBinding: csi-mock-volumes-1152-7788/csi-provisioner-role-cfg Jun 18 00:13:05.839: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-resizer Jun 18 00:13:05.842: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1152 Jun 18 00:13:05.842: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1152 Jun 18 00:13:05.844: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1152 Jun 18 00:13:05.848: INFO: creating *v1.Role: csi-mock-volumes-1152-7788/external-resizer-cfg-csi-mock-volumes-1152 Jun 18 00:13:05.851: INFO: creating *v1.RoleBinding: csi-mock-volumes-1152-7788/csi-resizer-role-cfg Jun 18 00:13:05.853: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-snapshotter Jun 18 00:13:05.855: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1152 Jun 18 00:13:05.855: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1152 Jun 18 00:13:05.858: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1152 Jun 18 00:13:05.862: INFO: creating *v1.Role: csi-mock-volumes-1152-7788/external-snapshotter-leaderelection-csi-mock-volumes-1152 Jun 18 00:13:05.865: INFO: creating *v1.RoleBinding: csi-mock-volumes-1152-7788/external-snapshotter-leaderelection Jun 18 00:13:05.867: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-mock Jun 18 00:13:05.869: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1152 Jun 18 00:13:05.872: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1152 Jun 18 00:13:05.875: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1152 Jun 18 00:13:05.878: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1152 Jun 18 00:13:05.880: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1152 Jun 18 00:13:05.883: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1152 Jun 18 00:13:05.886: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1152 Jun 18 00:13:05.888: INFO: creating *v1.StatefulSet: csi-mock-volumes-1152-7788/csi-mockplugin Jun 18 00:13:05.894: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1152 Jun 18 00:13:05.897: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1152" Jun 18 00:13:05.899: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1152 to register on node node1 STEP: Creating pod Jun 18 00:13:10.912: INFO: Warning: 
Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:13:10.916: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-j4wxc] to have phase Bound Jun 18 00:13:10.918: INFO: PersistentVolumeClaim pvc-j4wxc found but phase is Pending instead of Bound. Jun 18 00:13:12.923: INFO: PersistentVolumeClaim pvc-j4wxc found and phase=Bound (2.006418864s) Jun 18 00:13:12.938: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-j4wxc] to have phase Bound Jun 18 00:13:12.940: INFO: PersistentVolumeClaim pvc-j4wxc found and phase=Bound (2.666743ms) Jun 18 00:13:16.947: INFO: Deleting pod "pvc-volume-tester-qflxs" in namespace "csi-mock-volumes-1152" Jun 18 00:13:16.951: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qflxs" to be fully deleted Jun 18 00:13:34.975: INFO: Deleting pod "pvc-volume-tester-8lkkz" in namespace "csi-mock-volumes-1152" Jun 18 00:13:34.981: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8lkkz" to be fully deleted STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-qflxs Jun 18 00:13:52.006: INFO: Deleting pod "pvc-volume-tester-qflxs" in namespace "csi-mock-volumes-1152" STEP: Deleting pod pvc-volume-tester-8lkkz Jun 18 00:13:52.009: INFO: Deleting pod "pvc-volume-tester-8lkkz" in namespace "csi-mock-volumes-1152" STEP: Deleting claim pvc-j4wxc Jun 18 00:13:52.017: INFO: Waiting up to 2m0s for PersistentVolume pvc-18cf1cc2-ec88-46f8-99af-54ea8f1e0e08 to get deleted Jun 18 00:13:52.020: INFO: PersistentVolume pvc-18cf1cc2-ec88-46f8-99af-54ea8f1e0e08 found and phase=Bound (2.179009ms) Jun 18 00:13:54.023: INFO: PersistentVolume pvc-18cf1cc2-ec88-46f8-99af-54ea8f1e0e08 was removed STEP: Deleting storageclass csi-mock-volumes-1152-scmdtr6 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1152 STEP: Waiting for namespaces [csi-mock-volumes-1152] to vanish STEP: uninstalling csi mock driver Jun 18 00:14:00.042: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-attacher Jun 18 00:14:00.046: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1152 Jun 18 00:14:00.051: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1152 Jun 18 00:14:00.054: INFO: deleting *v1.Role: csi-mock-volumes-1152-7788/external-attacher-cfg-csi-mock-volumes-1152 Jun 18 00:14:00.057: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1152-7788/csi-attacher-role-cfg Jun 18 00:14:00.060: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-provisioner Jun 18 00:14:00.063: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1152 Jun 18 00:14:00.067: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1152 Jun 18 00:14:00.069: INFO: deleting *v1.Role: csi-mock-volumes-1152-7788/external-provisioner-cfg-csi-mock-volumes-1152 Jun 18 00:14:00.073: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1152-7788/csi-provisioner-role-cfg Jun 18 00:14:00.076: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-resizer Jun 18 00:14:00.079: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1152 Jun 18 00:14:00.085: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1152 Jun 18 00:14:00.094: INFO: deleting *v1.Role: csi-mock-volumes-1152-7788/external-resizer-cfg-csi-mock-volumes-1152 Jun 18 00:14:00.098: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1152-7788/csi-resizer-role-cfg Jun 18 00:14:00.101: INFO: deleting 
*v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-snapshotter Jun 18 00:14:00.107: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1152 Jun 18 00:14:00.111: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1152 Jun 18 00:14:00.114: INFO: deleting *v1.Role: csi-mock-volumes-1152-7788/external-snapshotter-leaderelection-csi-mock-volumes-1152 Jun 18 00:14:00.117: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1152-7788/external-snapshotter-leaderelection Jun 18 00:14:00.120: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1152-7788/csi-mock Jun 18 00:14:00.124: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1152 Jun 18 00:14:00.128: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1152 Jun 18 00:14:00.131: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1152 Jun 18 00:14:00.134: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1152 Jun 18 00:14:00.137: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1152 Jun 18 00:14:00.140: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1152 Jun 18 00:14:00.143: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1152 Jun 18 00:14:00.147: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1152-7788/csi-mockplugin Jun 18 00:14:00.150: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1152 STEP: deleting the driver namespace: csi-mock-volumes-1152-7788 STEP: Waiting for namespaces [csi-mock-volumes-1152-7788] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:06.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:60.426 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeUnstage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 should call NodeStage after NodeUnstage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:961 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success","total":-1,"completed":12,"skipped":498,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:06.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Jun 18 00:14:06.228: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:06.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "volume-2068" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 GlusterFS [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:128 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:06.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Jun 18 00:14:06.286: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:06.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-9638" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv3 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:102 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:12:44.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-5292 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:12:44.707: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-attacher Jun 18 00:12:44.710: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5292 Jun 18 00:12:44.710: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5292 Jun 18 00:12:44.713: INFO: creating *v1.ClusterRoleBinding: 
csi-attacher-role-csi-mock-volumes-5292 Jun 18 00:12:44.716: INFO: creating *v1.Role: csi-mock-volumes-5292-3440/external-attacher-cfg-csi-mock-volumes-5292 Jun 18 00:12:44.718: INFO: creating *v1.RoleBinding: csi-mock-volumes-5292-3440/csi-attacher-role-cfg Jun 18 00:12:44.733: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-provisioner Jun 18 00:12:44.736: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5292 Jun 18 00:12:44.736: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5292 Jun 18 00:12:44.739: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5292 Jun 18 00:12:44.743: INFO: creating *v1.Role: csi-mock-volumes-5292-3440/external-provisioner-cfg-csi-mock-volumes-5292 Jun 18 00:12:44.746: INFO: creating *v1.RoleBinding: csi-mock-volumes-5292-3440/csi-provisioner-role-cfg Jun 18 00:12:44.750: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-resizer Jun 18 00:12:44.753: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5292 Jun 18 00:12:44.753: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5292 Jun 18 00:12:44.755: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5292 Jun 18 00:12:44.759: INFO: creating *v1.Role: csi-mock-volumes-5292-3440/external-resizer-cfg-csi-mock-volumes-5292 Jun 18 00:12:44.762: INFO: creating *v1.RoleBinding: csi-mock-volumes-5292-3440/csi-resizer-role-cfg Jun 18 00:12:44.765: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-snapshotter Jun 18 00:12:44.767: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5292 Jun 18 00:12:44.767: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5292 Jun 18 00:12:44.770: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5292 Jun 18 00:12:44.772: INFO: creating *v1.Role: csi-mock-volumes-5292-3440/external-snapshotter-leaderelection-csi-mock-volumes-5292 Jun 18 00:12:44.775: INFO: creating *v1.RoleBinding: csi-mock-volumes-5292-3440/external-snapshotter-leaderelection Jun 18 00:12:44.778: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-mock Jun 18 00:12:44.780: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5292 Jun 18 00:12:44.782: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5292 Jun 18 00:12:44.785: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5292 Jun 18 00:12:44.788: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5292 Jun 18 00:12:44.790: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5292 Jun 18 00:12:44.793: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5292 Jun 18 00:12:44.795: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5292 Jun 18 00:12:44.798: INFO: creating *v1.StatefulSet: csi-mock-volumes-5292-3440/csi-mockplugin Jun 18 00:12:44.803: INFO: creating *v1.StatefulSet: csi-mock-volumes-5292-3440/csi-mockplugin-attacher Jun 18 00:12:44.806: INFO: creating *v1.StatefulSet: csi-mock-volumes-5292-3440/csi-mockplugin-resizer Jun 18 00:12:44.810: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5292 to register on node node1 STEP: Creating pod Jun 18 00:12:54.327: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil 
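The "Expanding current pvc" step that follows amounts to raising the claim's storage request once the PVC is bound; with nodeExpansion off, the mock driver resizes on the controller side and the pod keeps running. A hedged sketch of the same operation done by hand (the target size is an assumption, and it presumes the backing StorageClass sets allowVolumeExpansion: true):

# raise the requested size of the claim created below (target size assumed)
kubectl patch pvc pvc-jjn2p -n csi-mock-volumes-5292 \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
# watch the resize finish without restarting the pod
kubectl get pvc pvc-jjn2p -n csi-mock-volumes-5292 -w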
Jun 18 00:12:54.331: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-jjn2p] to have phase Bound Jun 18 00:12:54.333: INFO: PersistentVolumeClaim pvc-jjn2p found but phase is Pending instead of Bound. Jun 18 00:12:56.339: INFO: PersistentVolumeClaim pvc-jjn2p found and phase=Bound (2.007348566s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-zfhjw Jun 18 00:13:18.377: INFO: Deleting pod "pvc-volume-tester-zfhjw" in namespace "csi-mock-volumes-5292" Jun 18 00:13:18.382: INFO: Wait up to 5m0s for pod "pvc-volume-tester-zfhjw" to be fully deleted STEP: Deleting claim pvc-jjn2p Jun 18 00:13:30.394: INFO: Waiting up to 2m0s for PersistentVolume pvc-2407e147-b4f7-439e-886d-06136540a51c to get deleted Jun 18 00:13:30.397: INFO: PersistentVolume pvc-2407e147-b4f7-439e-886d-06136540a51c found and phase=Bound (2.331279ms) Jun 18 00:13:32.399: INFO: PersistentVolume pvc-2407e147-b4f7-439e-886d-06136540a51c was removed STEP: Deleting storageclass csi-mock-volumes-5292-scmxzjs STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5292 STEP: Waiting for namespaces [csi-mock-volumes-5292] to vanish STEP: uninstalling csi mock driver Jun 18 00:13:38.411: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-attacher Jun 18 00:13:38.416: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5292 Jun 18 00:13:38.420: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5292 Jun 18 00:13:38.423: INFO: deleting *v1.Role: csi-mock-volumes-5292-3440/external-attacher-cfg-csi-mock-volumes-5292 Jun 18 00:13:38.426: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5292-3440/csi-attacher-role-cfg Jun 18 00:13:38.430: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-provisioner Jun 18 00:13:38.433: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5292 Jun 18 00:13:38.437: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5292 Jun 18 00:13:38.440: INFO: deleting *v1.Role: csi-mock-volumes-5292-3440/external-provisioner-cfg-csi-mock-volumes-5292 Jun 18 00:13:38.445: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5292-3440/csi-provisioner-role-cfg Jun 18 00:13:38.448: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-resizer Jun 18 00:13:38.451: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5292 Jun 18 00:13:38.455: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5292 Jun 18 00:13:38.458: INFO: deleting *v1.Role: csi-mock-volumes-5292-3440/external-resizer-cfg-csi-mock-volumes-5292 Jun 18 00:13:38.461: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5292-3440/csi-resizer-role-cfg Jun 18 00:13:38.464: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-snapshotter Jun 18 00:13:38.467: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5292 Jun 18 00:13:38.472: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5292 Jun 18 00:13:38.475: INFO: deleting *v1.Role: csi-mock-volumes-5292-3440/external-snapshotter-leaderelection-csi-mock-volumes-5292 Jun 18 00:13:38.478: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5292-3440/external-snapshotter-leaderelection Jun 18 00:13:38.481: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5292-3440/csi-mock Jun 18 00:13:38.485: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-attacher-role-csi-mock-volumes-5292 Jun 18 00:13:38.488: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5292 Jun 18 00:13:38.491: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5292 Jun 18 00:13:38.495: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5292 Jun 18 00:13:38.498: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5292 Jun 18 00:13:38.501: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5292 Jun 18 00:13:38.506: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5292 Jun 18 00:13:38.509: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5292-3440/csi-mockplugin Jun 18 00:13:38.513: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5292-3440/csi-mockplugin-attacher Jun 18 00:13:38.516: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5292-3440/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-5292-3440 STEP: Waiting for namespaces [csi-mock-volumes-5292-3440] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:06.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:81.892 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":10,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:32.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Jun 18 00:14:02.834: INFO: Deleting pod "pv-5737"/"pod-ephm-test-projected-xr95" Jun 18 00:14:02.834: INFO: Deleting pod "pod-ephm-test-projected-xr95" in namespace "pv-5737" Jun 18 00:14:02.839: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-xr95" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:08.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5737" for this suite. 
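The Ephemeralstorage spec above only has to show that a pod whose projected volume can never be mounted is still deletable. A minimal way to reproduce that situation by hand, with the pod name and the missing ConfigMap name as assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-ephm-demo                # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: broken
      mountPath: /etc/broken
  volumes:
  - name: broken
    projected:
      sources:
      - configMap:
          name: does-not-exist       # deliberately missing, so the mount never succeeds
EOF
# the pod stays in ContainerCreating, but deletion must still complete
kubectl delete pod pod-ephm-demo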
• [SLOW TEST:36.070 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":14,"skipped":482,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:21.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-3945 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:13:21.195: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-attacher Jun 18 00:13:21.198: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3945 Jun 18 00:13:21.198: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3945 Jun 18 00:13:21.201: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3945 Jun 18 00:13:21.204: INFO: creating *v1.Role: csi-mock-volumes-3945-3590/external-attacher-cfg-csi-mock-volumes-3945 Jun 18 00:13:21.207: INFO: creating *v1.RoleBinding: csi-mock-volumes-3945-3590/csi-attacher-role-cfg Jun 18 00:13:21.209: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-provisioner Jun 18 00:13:21.214: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3945 Jun 18 00:13:21.214: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3945 Jun 18 00:13:21.217: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3945 Jun 18 00:13:21.220: INFO: creating *v1.Role: csi-mock-volumes-3945-3590/external-provisioner-cfg-csi-mock-volumes-3945 Jun 18 00:13:21.223: INFO: creating *v1.RoleBinding: csi-mock-volumes-3945-3590/csi-provisioner-role-cfg Jun 18 00:13:21.225: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-resizer Jun 18 00:13:21.227: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3945 Jun 18 00:13:21.227: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3945 Jun 18 00:13:21.230: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3945 Jun 18 00:13:21.232: INFO: creating *v1.Role: csi-mock-volumes-3945-3590/external-resizer-cfg-csi-mock-volumes-3945 Jun 18 00:13:21.234: INFO: creating *v1.RoleBinding: csi-mock-volumes-3945-3590/csi-resizer-role-cfg Jun 18 00:13:21.236: 
INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-snapshotter Jun 18 00:13:21.239: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3945 Jun 18 00:13:21.239: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3945 Jun 18 00:13:21.241: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3945 Jun 18 00:13:21.243: INFO: creating *v1.Role: csi-mock-volumes-3945-3590/external-snapshotter-leaderelection-csi-mock-volumes-3945 Jun 18 00:13:21.245: INFO: creating *v1.RoleBinding: csi-mock-volumes-3945-3590/external-snapshotter-leaderelection Jun 18 00:13:21.248: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-mock Jun 18 00:13:21.251: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3945 Jun 18 00:13:21.253: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3945 Jun 18 00:13:21.255: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3945 Jun 18 00:13:21.257: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3945 Jun 18 00:13:21.259: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3945 Jun 18 00:13:21.262: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3945 Jun 18 00:13:21.264: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3945 Jun 18 00:13:21.266: INFO: creating *v1.StatefulSet: csi-mock-volumes-3945-3590/csi-mockplugin Jun 18 00:13:21.270: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3945 Jun 18 00:13:21.273: INFO: creating *v1.StatefulSet: csi-mock-volumes-3945-3590/csi-mockplugin-attacher Jun 18 00:13:21.277: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3945" Jun 18 00:13:21.278: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3945 to register on node node2 STEP: Creating pod Jun 18 00:13:30.793: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:13:30.797: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9jrhx] to have phase Bound Jun 18 00:13:30.799: INFO: PersistentVolumeClaim pvc-9jrhx found but phase is Pending instead of Bound. 
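The podInfoOnMount=nil spec above exercises the CSIDriver knob that controls whether pod details reach the driver: when it is true, NodePublishVolume receives volume_context keys such as csi.storage.k8s.io/pod.name, pod.namespace, pod.uid and serviceAccount.name; when it is unset (nil) or false, as here, none are passed. A sketch of the object with that field made explicit (driver name and other fields assumed):

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi-mock-example             # assumed driver name
spec:
  attachRequired: true               # assumed
  podInfoOnMount: false              # the behaviour under test: no pod info on NodePublishVolume
  volumeLifecycleModes: ["Persistent"]
EOF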
Jun 18 00:13:32.803: INFO: PersistentVolumeClaim pvc-9jrhx found and phase=Bound (2.0055878s) STEP: Deleting the previously created pod Jun 18 00:13:42.822: INFO: Deleting pod "pvc-volume-tester-5c5pf" in namespace "csi-mock-volumes-3945" Jun 18 00:13:42.827: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5c5pf" to be fully deleted STEP: Checking CSI driver logs Jun 18 00:13:58.848: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/25c51b3b-5266-43e0-8f07-50ac48986b4c/volumes/kubernetes.io~csi/pvc-d6feb781-51c0-406c-817b-8e0a8a5034fe/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-5c5pf Jun 18 00:13:58.848: INFO: Deleting pod "pvc-volume-tester-5c5pf" in namespace "csi-mock-volumes-3945" STEP: Deleting claim pvc-9jrhx Jun 18 00:13:58.855: INFO: Waiting up to 2m0s for PersistentVolume pvc-d6feb781-51c0-406c-817b-8e0a8a5034fe to get deleted Jun 18 00:13:58.857: INFO: PersistentVolume pvc-d6feb781-51c0-406c-817b-8e0a8a5034fe found and phase=Bound (2.110451ms) Jun 18 00:14:00.862: INFO: PersistentVolume pvc-d6feb781-51c0-406c-817b-8e0a8a5034fe was removed STEP: Deleting storageclass csi-mock-volumes-3945-sc4ktrk STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3945 STEP: Waiting for namespaces [csi-mock-volumes-3945] to vanish STEP: uninstalling csi mock driver Jun 18 00:14:06.873: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-attacher Jun 18 00:14:06.876: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3945 Jun 18 00:14:06.880: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3945 Jun 18 00:14:06.883: INFO: deleting *v1.Role: csi-mock-volumes-3945-3590/external-attacher-cfg-csi-mock-volumes-3945 Jun 18 00:14:06.886: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3945-3590/csi-attacher-role-cfg Jun 18 00:14:06.889: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-provisioner Jun 18 00:14:06.893: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3945 Jun 18 00:14:06.896: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3945 Jun 18 00:14:06.900: INFO: deleting *v1.Role: csi-mock-volumes-3945-3590/external-provisioner-cfg-csi-mock-volumes-3945 Jun 18 00:14:06.903: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3945-3590/csi-provisioner-role-cfg Jun 18 00:14:06.906: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-resizer Jun 18 00:14:06.909: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3945 Jun 18 00:14:06.912: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3945 Jun 18 00:14:06.915: INFO: deleting *v1.Role: csi-mock-volumes-3945-3590/external-resizer-cfg-csi-mock-volumes-3945 Jun 18 00:14:06.918: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3945-3590/csi-resizer-role-cfg Jun 18 00:14:06.922: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-snapshotter Jun 18 00:14:06.925: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3945 Jun 18 00:14:06.928: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3945 Jun 18 00:14:06.931: INFO: deleting *v1.Role: csi-mock-volumes-3945-3590/external-snapshotter-leaderelection-csi-mock-volumes-3945 Jun 18 
00:14:06.934: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3945-3590/external-snapshotter-leaderelection Jun 18 00:14:06.937: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3945-3590/csi-mock Jun 18 00:14:06.940: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3945 Jun 18 00:14:06.943: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3945 Jun 18 00:14:06.946: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3945 Jun 18 00:14:06.950: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3945 Jun 18 00:14:06.952: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3945 Jun 18 00:14:06.955: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3945 Jun 18 00:14:06.958: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3945 Jun 18 00:14:06.961: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3945-3590/csi-mockplugin Jun 18 00:14:06.965: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3945 Jun 18 00:14:06.970: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3945-3590/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3945-3590 STEP: Waiting for namespaces [csi-mock-volumes-3945-3590] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:12.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:51.855 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":20,"skipped":792,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:01.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Jun 18 00:14:01.637: INFO: The status of Pod test-hostpath-type-xqtrw is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:03.641: INFO: The status of Pod test-hostpath-type-xqtrw is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:05.642: INFO: The status of Pod test-hostpath-type-xqtrw is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:07.640: INFO: The status of Pod test-hostpath-type-xqtrw is Running (Ready = true) STEP: running on node node1 STEP: Should 
automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:13.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-2068" for this suite. • [SLOW TEST:12.100 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory","total":-1,"completed":12,"skipped":447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:04.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c" Jun 18 00:14:06.557: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c && dd if=/dev/zero of=/tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c/file] Namespace:persistent-local-volumes-test-2770 PodName:hostexec-node2-mdm7f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:06.557: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:06.671: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2770 PodName:hostexec-node2-mdm7f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:06.671: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:06.840: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c && chmod o+rwx /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c] Namespace:persistent-local-volumes-test-2770 PodName:hostexec-node2-mdm7f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:06.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:07.066: INFO: Creating a PV followed by a PVC Jun 18 00:14:07.072: INFO: Waiting for PV local-pvmdvpn to bind to PVC pvc-w2qsr Jun 18 00:14:07.072: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-w2qsr] to have phase Bound Jun 18 00:14:07.075: INFO: PersistentVolumeClaim pvc-w2qsr found but phase is Pending instead of Bound. Jun 18 00:14:09.078: INFO: PersistentVolumeClaim pvc-w2qsr found and phase=Bound (2.005907041s) Jun 18 00:14:09.078: INFO: Waiting up to 3m0s for PersistentVolume local-pvmdvpn to have phase Bound Jun 18 00:14:09.081: INFO: PersistentVolume local-pvmdvpn found and phase=Bound (2.667995ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Jun 18 00:14:15.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2770 exec pod-d5c2f9d9-a120-4cc3-a314-3ff33dc5e58e --namespace=persistent-local-volumes-test-2770 -- stat -c %g /mnt/volume1' Jun 18 00:14:15.370: INFO: stderr: "" Jun 18 00:14:15.370: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-d5c2f9d9-a120-4cc3-a314-3ff33dc5e58e in namespace persistent-local-volumes-test-2770 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:14:15.374: INFO: Deleting PersistentVolumeClaim "pvc-w2qsr" Jun 18 00:14:15.377: INFO: Deleting PersistentVolume "local-pvmdvpn" Jun 18 00:14:15.382: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c] Namespace:persistent-local-volumes-test-2770 PodName:hostexec-node2-mdm7f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:15.382: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:15.516: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2770 PodName:hostexec-node2-mdm7f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:15.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c/file Jun 18 00:14:15.641: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup 
-d /dev/loop0] Namespace:persistent-local-volumes-test-2770 PodName:hostexec-node2-mdm7f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:15.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c Jun 18 00:14:15.734: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2e9e0a3e-568a-4061-b619-5534c6cab51c] Namespace:persistent-local-volumes-test-2770 PodName:hostexec-node2-mdm7f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:15.734: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:15.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2770" for this suite. • [SLOW TEST:11.341 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":23,"skipped":750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:06.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8" Jun 18 00:14:10.699: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8 && dd if=/dev/zero of=/tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8/file] Namespace:persistent-local-volumes-test-6544 PodName:hostexec-node1-b2lp7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:10.699: INFO: >>> 
kubeConfig: /root/.kube/config Jun 18 00:14:10.836: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6544 PodName:hostexec-node1-b2lp7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:10.836: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:10.924: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8 && chmod o+rwx /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8] Namespace:persistent-local-volumes-test-6544 PodName:hostexec-node1-b2lp7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:10.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:11.262: INFO: Creating a PV followed by a PVC Jun 18 00:14:11.269: INFO: Waiting for PV local-pvvzf29 to bind to PVC pvc-wjm7m Jun 18 00:14:11.269: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-wjm7m] to have phase Bound Jun 18 00:14:11.271: INFO: PersistentVolumeClaim pvc-wjm7m found but phase is Pending instead of Bound. Jun 18 00:14:13.275: INFO: PersistentVolumeClaim pvc-wjm7m found and phase=Bound (2.005808038s) Jun 18 00:14:13.275: INFO: Waiting up to 3m0s for PersistentVolume local-pvvzf29 to have phase Bound Jun 18 00:14:13.277: INFO: PersistentVolume local-pvvzf29 found and phase=Bound (2.14278ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:14:17.304: INFO: pod "pod-6f57d25d-b4f6-4089-926a-dbe630e36b8b" created on Node "node1" STEP: Writing in pod1 Jun 18 00:14:17.304: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6544 PodName:pod-6f57d25d-b4f6-4089-926a-dbe630e36b8b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:17.304: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:17.405: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 18 00:14:17.405: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6544 PodName:pod-6f57d25d-b4f6-4089-926a-dbe630e36b8b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:17.405: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:17.494: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod 
pod-6f57d25d-b4f6-4089-926a-dbe630e36b8b in namespace persistent-local-volumes-test-6544 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:14:17.498: INFO: Deleting PersistentVolumeClaim "pvc-wjm7m" Jun 18 00:14:17.502: INFO: Deleting PersistentVolume "local-pvvzf29" Jun 18 00:14:17.507: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8] Namespace:persistent-local-volumes-test-6544 PodName:hostexec-node1-b2lp7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:17.507: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:17.622: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6544 PodName:hostexec-node1-b2lp7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:17.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8/file Jun 18 00:14:17.739: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6544 PodName:hostexec-node1-b2lp7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:17.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8 Jun 18 00:14:17.866: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3f1c870c-a5b4-4583-a2d8-c9893e84b3e8] Namespace:persistent-local-volumes-test-6544 PodName:hostexec-node1-b2lp7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:17.866: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:17.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6544" for this suite. 
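The blockfswithformat cases above build their local volume from a loop device on the node. A minimal consolidated sketch of that setup and teardown, assembled from the commands logged above; the directory name is illustrative, since the suite generates a random /tmp/local-volume-test-* path and runs every command through nsenter in a hostexec pod:

VOLDIR=/tmp/local-volume-test-example
mkdir -p "$VOLDIR"
dd if=/dev/zero of="$VOLDIR/file" bs=4096 count=5120   # ~20 MiB backing file
losetup -f "$VOLDIR/file"                              # attach the first free loop device
LOOPDEV=$(losetup | grep "$VOLDIR/file" | awk '{ print $1 }')
mkfs -t ext4 "$LOOPDEV"                                # format the loop device
mount -t ext4 "$LOOPDEV" "$VOLDIR"                     # mount it over the test directory
chmod o+rwx "$VOLDIR"                                  # make the mount writable by other users

# Teardown, mirroring the AfterEach steps:
umount "$VOLDIR"
losetup -d "$LOOPDEV"
rm -r "$VOLDIR"

The resulting directory is then published as a local PersistentVolume, bound to a PVC, and only afterwards mounted by the test pods.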
• [SLOW TEST:11.322 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":11,"skipped":299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:15.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 18 00:14:17.967: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6306 PodName:hostexec-node1-7pjg7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:17.967: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:18.116: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 18 00:14:18.116: INFO: exec node1: stdout: "0\n" Jun 18 00:14:18.116: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 18 00:14:18.116: INFO: exec node1: exit code: 0 Jun 18 00:14:18.116: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:18.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6306" for this suite. 
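The gce-localssd-scsi-fs case above is skipped because its precondition check finds no SCSI local SSD on the node. The check is just a directory listing; a sketch of running it by hand on a node (the by-uuid path typically exists only on GCE instances provisioned with SCSI local SSDs):

COUNT=$(ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ 2>/dev/null | wc -l)
if [ "$COUNT" -ge 1 ]; then
  echo "found $COUNT SCSI local SSD filesystem(s)"
else
  echo "no SCSI local SSD; [Volume type: gce-localssd-scsi-fs] tests will be skipped"
fi

Note that the logged command still reports exit code 0 even though ls fails, because the pipeline's status comes from wc; the skip appears to be driven by the "0" count on stdout rather than by the exit code.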
S [SKIPPING] in Spec Setup (BeforeEach) [2.209 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:18.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Jun 18 00:14:18.145: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:18.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-6905" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:06.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:14:08.416: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ccaeea9e-af38-4956-b527-9938f3ce0898 && mount --bind /tmp/local-volume-test-ccaeea9e-af38-4956-b527-9938f3ce0898 /tmp/local-volume-test-ccaeea9e-af38-4956-b527-9938f3ce0898] Namespace:persistent-local-volumes-test-3631 PodName:hostexec-node2-ztfmg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:08.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:08.517: INFO: Creating a PV followed by a PVC Jun 18 00:14:08.524: INFO: Waiting for PV local-pvxjrnf to bind to PVC pvc-8j8bq Jun 18 00:14:08.524: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8j8bq] to have phase Bound Jun 18 00:14:08.527: INFO: PersistentVolumeClaim pvc-8j8bq found but phase is Pending instead of Bound. Jun 18 00:14:10.532: INFO: PersistentVolumeClaim pvc-8j8bq found but phase is Pending instead of Bound. 
Jun 18 00:14:12.535: INFO: PersistentVolumeClaim pvc-8j8bq found and phase=Bound (4.01035711s) Jun 18 00:14:12.535: INFO: Waiting up to 3m0s for PersistentVolume local-pvxjrnf to have phase Bound Jun 18 00:14:12.537: INFO: PersistentVolume local-pvxjrnf found and phase=Bound (1.980225ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:14:16.566: INFO: pod "pod-5601b75e-f1d7-47b0-8ebf-c4a2f5ca3eb5" created on Node "node2" STEP: Writing in pod1 Jun 18 00:14:16.566: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3631 PodName:pod-5601b75e-f1d7-47b0-8ebf-c4a2f5ca3eb5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:16.566: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:16.650: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:14:16.650: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3631 PodName:pod-5601b75e-f1d7-47b0-8ebf-c4a2f5ca3eb5 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:16.650: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:16.727: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-5601b75e-f1d7-47b0-8ebf-c4a2f5ca3eb5 in namespace persistent-local-volumes-test-3631 STEP: Creating pod2 STEP: Creating a pod Jun 18 00:14:22.756: INFO: pod "pod-6f48ee4b-47fd-4882-ac06-876b29a7ebd8" created on Node "node2" STEP: Reading in pod2 Jun 18 00:14:22.756: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3631 PodName:pod-6f48ee4b-47fd-4882-ac06-876b29a7ebd8 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:22.756: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:22.838: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-6f48ee4b-47fd-4882-ac06-876b29a7ebd8 in namespace persistent-local-volumes-test-3631 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:14:22.843: INFO: Deleting PersistentVolumeClaim "pvc-8j8bq" Jun 18 00:14:22.847: INFO: Deleting PersistentVolume "local-pvxjrnf" STEP: Removing the test directory Jun 18 00:14:22.851: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-ccaeea9e-af38-4956-b527-9938f3ce0898 && rm -r /tmp/local-volume-test-ccaeea9e-af38-4956-b527-9938f3ce0898] Namespace:persistent-local-volumes-test-3631 PodName:hostexec-node2-ztfmg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:22.851: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:22.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3631" for this suite. • [SLOW TEST:16.596 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":559,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:18.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 STEP: Creating configMap with name configmap-test-volume-3c09f351-83a5-4839-9e9b-150269fcaf8d STEP: Creating a pod to test consume configMaps Jun 18 00:14:18.196: INFO: Waiting up to 5m0s for pod "pod-configmaps-c41053f8-431d-4b08-ad87-9e7a6c5891af" in namespace "configmap-9849" to be "Succeeded or Failed" Jun 18 00:14:18.198: INFO: Pod "pod-configmaps-c41053f8-431d-4b08-ad87-9e7a6c5891af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242517ms Jun 18 00:14:20.201: INFO: Pod "pod-configmaps-c41053f8-431d-4b08-ad87-9e7a6c5891af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005344828s Jun 18 00:14:22.205: INFO: Pod "pod-configmaps-c41053f8-431d-4b08-ad87-9e7a6c5891af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008851272s Jun 18 00:14:24.208: INFO: Pod "pod-configmaps-c41053f8-431d-4b08-ad87-9e7a6c5891af": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012379169s STEP: Saw pod success Jun 18 00:14:24.208: INFO: Pod "pod-configmaps-c41053f8-431d-4b08-ad87-9e7a6c5891af" satisfied condition "Succeeded or Failed" Jun 18 00:14:24.210: INFO: Trying to get logs from node node2 pod pod-configmaps-c41053f8-431d-4b08-ad87-9e7a6c5891af container agnhost-container: STEP: delete the pod Jun 18 00:14:24.397: INFO: Waiting for pod pod-configmaps-c41053f8-431d-4b08-ad87-9e7a6c5891af to disappear Jun 18 00:14:24.399: INFO: Pod pod-configmaps-c41053f8-431d-4b08-ad87-9e7a6c5891af no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:24.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9849" for this suite. • [SLOW TEST:6.255 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":24,"skipped":790,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:18.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Jun 18 00:14:18.216: INFO: The status of Pod test-hostpath-type-vv8nt is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:20.220: INFO: The status of Pod test-hostpath-type-vv8nt is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:22.219: INFO: The status of Pod test-hostpath-type-vv8nt is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Jun 18 00:14:22.221: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-453 PodName:test-hostpath-type-vv8nt ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:22.221: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:26.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-453" for this suite. 
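For the ConfigMap FSGroup case a little earlier in this block, the pod consumes a ConfigMap as a volume while running as a non-root user with an fsGroup set, so the projected files come up group-owned by that gid. A rough, self-contained sketch with hypothetical names (demo-cm, demo-configmap-fsgroup, busybox in place of the suite's agnhost test image, arbitrary uid/gid values):

kubectl create configmap demo-cm --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-configmap-fsgroup
spec:
  securityContext:
    runAsUser: 1000      # non-root
    fsGroup: 2000        # group applied to the volume
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/config && cat /etc/config/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
kubectl logs demo-configmap-fsgroup   # after completion, the listing should show gid 2000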
• [SLOW TEST:8.154 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev","total":-1,"completed":12,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:23.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 18 00:14:23.045: INFO: The status of Pod test-hostpath-type-l767z is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:25.049: INFO: The status of Pod test-hostpath-type-l767z is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:27.050: INFO: The status of Pod test-hostpath-type-l767z is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 18 00:14:27.052: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-6496 PodName:test-hostpath-type-l767z ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:27.052: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:29.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-6496" for this suite. 
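The two HostPathType device cases above first create a device node under a helper pod's /mnt/test hostPath and then mount that path with an explicit hostPath type, expecting success when the type matches (CharDevice) and a HostPathType error event when it does not (for example Directory). A sketch with illustrative names; the mknod lines mirror the commands in the log:

mknod /mnt/test/achardev c 89 1   # character device, major 89 minor 1
mknod /mnt/test/ablkdev  b 89 1   # block device

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-chardev-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls -l /dev/under-test"]
    volumeMounts:
    - name: dev
      mountPath: /dev/under-test
  volumes:
  - name: dev
    hostPath:
      path: /mnt/test/achardev
      type: CharDevice   # set this to Directory to reproduce the expected mount failure
EOF

When the type check is meant to fail, the suite watches for the corresponding error event instead of waiting for the pod to reach Running.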
• [SLOW TEST:6.154 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:13.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-66666035-bdc5-4811-ad8a-3bb593ec5a53" Jun 18 00:14:19.940: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-66666035-bdc5-4811-ad8a-3bb593ec5a53 && dd if=/dev/zero of=/tmp/local-volume-test-66666035-bdc5-4811-ad8a-3bb593ec5a53/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-66666035-bdc5-4811-ad8a-3bb593ec5a53/file] Namespace:persistent-local-volumes-test-1631 PodName:hostexec-node2-7br8k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:19.940: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:20.095: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-66666035-bdc5-4811-ad8a-3bb593ec5a53/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1631 PodName:hostexec-node2-7br8k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:20.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:20.277: INFO: Creating a PV followed by a PVC Jun 18 00:14:20.284: INFO: Waiting for PV local-pvkp2vm to bind to PVC pvc-zxbx5 Jun 18 00:14:20.284: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zxbx5] to have phase Bound Jun 18 00:14:20.286: INFO: PersistentVolumeClaim pvc-zxbx5 found but phase is Pending instead of Bound. Jun 18 00:14:22.289: INFO: PersistentVolumeClaim pvc-zxbx5 found but phase is Pending instead of Bound. Jun 18 00:14:24.293: INFO: PersistentVolumeClaim pvc-zxbx5 found but phase is Pending instead of Bound. Jun 18 00:14:26.296: INFO: PersistentVolumeClaim pvc-zxbx5 found but phase is Pending instead of Bound. 
Jun 18 00:14:28.299: INFO: PersistentVolumeClaim pvc-zxbx5 found and phase=Bound (8.015182029s) Jun 18 00:14:28.299: INFO: Waiting up to 3m0s for PersistentVolume local-pvkp2vm to have phase Bound Jun 18 00:14:28.302: INFO: PersistentVolume local-pvkp2vm found and phase=Bound (2.241753ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:14:32.328: INFO: pod "pod-c0f48283-d2c6-4106-8fd2-6c32948cda09" created on Node "node2" STEP: Writing in pod1 Jun 18 00:14:32.328: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1631 PodName:pod-c0f48283-d2c6-4106-8fd2-6c32948cda09 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:32.328: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:32.424: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:14:32.424: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1631 PodName:pod-c0f48283-d2c6-4106-8fd2-6c32948cda09 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:32.424: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:32.502: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-c0f48283-d2c6-4106-8fd2-6c32948cda09 in namespace persistent-local-volumes-test-1631 STEP: Creating pod2 STEP: Creating a pod Jun 18 00:14:36.529: INFO: pod "pod-00d13cf4-5aaf-4f1d-bfee-899042e779df" created on Node "node2" STEP: Reading in pod2 Jun 18 00:14:36.529: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1631 PodName:pod-00d13cf4-5aaf-4f1d-bfee-899042e779df ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:36.529: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:36.603: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-00d13cf4-5aaf-4f1d-bfee-899042e779df in namespace persistent-local-volumes-test-1631 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:14:36.608: INFO: Deleting PersistentVolumeClaim "pvc-zxbx5" Jun 18 00:14:36.611: INFO: Deleting PersistentVolume "local-pvkp2vm" Jun 18 00:14:36.615: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-66666035-bdc5-4811-ad8a-3bb593ec5a53/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1631 PodName:hostexec-node2-7br8k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:36.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-66666035-bdc5-4811-ad8a-3bb593ec5a53/file Jun 18 00:14:36.729: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1631 PodName:hostexec-node2-7br8k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:36.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-66666035-bdc5-4811-ad8a-3bb593ec5a53 Jun 18 00:14:36.814: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-66666035-bdc5-4811-ad8a-3bb593ec5a53] Namespace:persistent-local-volumes-test-1631 PodName:hostexec-node2-7br8k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:36.814: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:36.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1631" for this suite. • [SLOW TEST:23.024 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":540,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:08.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:14:12.936: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d-backend && mount --bind /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d-backend /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d-backend && ln -s /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d-backend /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d] Namespace:persistent-local-volumes-test-7974 PodName:hostexec-node1-v4rx5 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:12.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:13.026: INFO: Creating a PV followed by a PVC Jun 18 00:14:13.032: INFO: Waiting for PV local-pvnkwdm to bind to PVC pvc-p6nl6 Jun 18 00:14:13.032: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-p6nl6] to have phase Bound Jun 18 00:14:13.034: INFO: PersistentVolumeClaim pvc-p6nl6 found but phase is Pending instead of Bound. Jun 18 00:14:15.039: INFO: PersistentVolumeClaim pvc-p6nl6 found but phase is Pending instead of Bound. Jun 18 00:14:17.045: INFO: PersistentVolumeClaim pvc-p6nl6 found but phase is Pending instead of Bound. Jun 18 00:14:19.049: INFO: PersistentVolumeClaim pvc-p6nl6 found but phase is Pending instead of Bound. Jun 18 00:14:21.055: INFO: PersistentVolumeClaim pvc-p6nl6 found but phase is Pending instead of Bound. Jun 18 00:14:23.058: INFO: PersistentVolumeClaim pvc-p6nl6 found but phase is Pending instead of Bound. Jun 18 00:14:25.064: INFO: PersistentVolumeClaim pvc-p6nl6 found but phase is Pending instead of Bound. Jun 18 00:14:27.069: INFO: PersistentVolumeClaim pvc-p6nl6 found and phase=Bound (14.037585014s) Jun 18 00:14:27.069: INFO: Waiting up to 3m0s for PersistentVolume local-pvnkwdm to have phase Bound Jun 18 00:14:27.074: INFO: PersistentVolume local-pvnkwdm found and phase=Bound (4.353137ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Jun 18 00:14:31.103: INFO: pod "pod-a7c93103-4d96-48df-ad7f-d7ee124c35fd" created on Node "node1" STEP: Writing in pod1 Jun 18 00:14:31.103: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7974 PodName:pod-a7c93103-4d96-48df-ad7f-d7ee124c35fd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:31.103: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:31.183: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Jun 18 00:14:31.183: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7974 PodName:pod-a7c93103-4d96-48df-ad7f-d7ee124c35fd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:31.183: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:31.269: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Jun 18 00:14:37.288: INFO: pod "pod-4a863400-d717-4cdb-aaa1-1bf14a7b7a5f" created on Node "node1" Jun 18 00:14:37.288: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7974 PodName:pod-4a863400-d717-4cdb-aaa1-1bf14a7b7a5f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:37.288: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:37.386: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Jun 18 00:14:37.386: INFO: 
ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7974 PodName:pod-4a863400-d717-4cdb-aaa1-1bf14a7b7a5f ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:37.386: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:37.470: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Jun 18 00:14:37.470: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7974 PodName:pod-a7c93103-4d96-48df-ad7f-d7ee124c35fd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:37.470: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:37.554: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-a7c93103-4d96-48df-ad7f-d7ee124c35fd in namespace persistent-local-volumes-test-7974 STEP: Deleting pod2 STEP: Deleting pod pod-4a863400-d717-4cdb-aaa1-1bf14a7b7a5f in namespace persistent-local-volumes-test-7974 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:14:37.563: INFO: Deleting PersistentVolumeClaim "pvc-p6nl6" Jun 18 00:14:37.566: INFO: Deleting PersistentVolume "local-pvnkwdm" STEP: Removing the test directory Jun 18 00:14:37.570: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d && umount /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d-backend && rm -r /tmp/local-volume-test-1438a3a2-b1cc-43dd-bb18-35181ecbd39d-backend] Namespace:persistent-local-volumes-test-7974 PodName:hostexec-node1-v4rx5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:37.570: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:37.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7974" for this suite. 
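The dir-bindmounted and dir-link-bindmounted cases in this block differ only in how the host directory handed to the local PV is prepared. A condensed sketch of both layouts and their cleanup, using illustrative paths in place of the random /tmp/local-volume-test-* names in the log:

# dir-bindmounted: a directory bind-mounted onto itself
mkdir /tmp/local-vol
mount --bind /tmp/local-vol /tmp/local-vol

# dir-link-bindmounted: a symlink to a bind-mounted backing directory;
# the symlink is then used as the PV's path
mkdir /tmp/local-vol-backend
mount --bind /tmp/local-vol-backend /tmp/local-vol-backend
ln -s /tmp/local-vol-backend /tmp/local-vol-link

# cleanup, mirroring the AfterEach steps
rm /tmp/local-vol-link
umount /tmp/local-vol-backend
rm -r /tmp/local-vol-backend
umount /tmp/local-vol
rm -r /tmp/local-vol

Once the PV is bound, the read/write check is the same as for the other volume types: one pod writes /mnt/volume1/test-file and the other pod (or the same pod, depending on the case) reads it back with cat.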
• [SLOW TEST:28.834 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":15,"skipped":488,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:13.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:14:15.053: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ee5eec4e-d7d4-4ac0-a806-73caba246c74-backend && ln -s /tmp/local-volume-test-ee5eec4e-d7d4-4ac0-a806-73caba246c74-backend /tmp/local-volume-test-ee5eec4e-d7d4-4ac0-a806-73caba246c74] Namespace:persistent-local-volumes-test-4961 PodName:hostexec-node1-kpqwh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:15.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:15.216: INFO: Creating a PV followed by a PVC Jun 18 00:14:15.224: INFO: Waiting for PV local-pv5c4d6 to bind to PVC pvc-msnqv Jun 18 00:14:15.224: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-msnqv] to have phase Bound Jun 18 00:14:15.226: INFO: PersistentVolumeClaim pvc-msnqv found but phase is Pending instead of Bound. Jun 18 00:14:17.230: INFO: PersistentVolumeClaim pvc-msnqv found but phase is Pending instead of Bound. Jun 18 00:14:19.234: INFO: PersistentVolumeClaim pvc-msnqv found but phase is Pending instead of Bound. Jun 18 00:14:21.241: INFO: PersistentVolumeClaim pvc-msnqv found but phase is Pending instead of Bound. Jun 18 00:14:23.245: INFO: PersistentVolumeClaim pvc-msnqv found but phase is Pending instead of Bound. Jun 18 00:14:25.247: INFO: PersistentVolumeClaim pvc-msnqv found but phase is Pending instead of Bound. 
Jun 18 00:14:27.251: INFO: PersistentVolumeClaim pvc-msnqv found and phase=Bound (12.026701727s) Jun 18 00:14:27.251: INFO: Waiting up to 3m0s for PersistentVolume local-pv5c4d6 to have phase Bound Jun 18 00:14:27.253: INFO: PersistentVolume local-pv5c4d6 found and phase=Bound (2.476632ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 18 00:14:31.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4961 exec pod-ba463ac5-f8a8-4314-9714-341ac343a879 --namespace=persistent-local-volumes-test-4961 -- stat -c %g /mnt/volume1' Jun 18 00:14:31.585: INFO: stderr: "" Jun 18 00:14:31.585: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 18 00:14:37.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-4961 exec pod-c09bb66c-4902-48c2-b876-852ad8dc12a5 --namespace=persistent-local-volumes-test-4961 -- stat -c %g /mnt/volume1' Jun 18 00:14:37.863: INFO: stderr: "" Jun 18 00:14:37.863: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-ba463ac5-f8a8-4314-9714-341ac343a879 in namespace persistent-local-volumes-test-4961 STEP: Deleting second pod STEP: Deleting pod pod-c09bb66c-4902-48c2-b876-852ad8dc12a5 in namespace persistent-local-volumes-test-4961 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:14:37.879: INFO: Deleting PersistentVolumeClaim "pvc-msnqv" Jun 18 00:14:37.884: INFO: Deleting PersistentVolume "local-pv5c4d6" STEP: Removing the test directory Jun 18 00:14:37.887: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ee5eec4e-d7d4-4ac0-a806-73caba246c74 && rm -r /tmp/local-volume-test-ee5eec4e-d7d4-4ac0-a806-73caba246c74-backend] Namespace:persistent-local-volumes-test-4961 PodName:hostexec-node1-kpqwh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:37.887: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:37.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4961" for this suite. 
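The fsGroup cases in this block all verify ownership the same way: exec into each pod and stat the group id of the mount point. The "1234" stdout above implies the pods set securityContext.fsGroup to 1234, though the pod spec itself is not shown in the log. A sketch of the check with illustrative pod and namespace names:

kubectl --namespace demo-ns exec demo-pod -- stat -c %g /mnt/volume1
# expected output when the volume was mounted with fsGroup 1234:
# 1234

For the two-pods-simultaneously variant, the same command is run against both pods and both are expected to report the identical gid.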
• [SLOW TEST:24.995 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":21,"skipped":797,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:36.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Jun 18 00:14:40.998: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d6d29350-a8f2-4809-86c3-7f48548d325c] Namespace:persistent-local-volumes-test-6400 PodName:hostexec-node2-l722d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:40.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:41.078: INFO: Creating a PV followed by a PVC Jun 18 00:14:41.087: INFO: Waiting for PV local-pvv57jb to bind to PVC pvc-5tmmn Jun 18 00:14:41.087: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5tmmn] to have phase Bound Jun 18 00:14:41.090: INFO: PersistentVolumeClaim pvc-5tmmn found but phase is Pending instead of Bound. Jun 18 00:14:43.093: INFO: PersistentVolumeClaim pvc-5tmmn found and phase=Bound (2.005734193s) Jun 18 00:14:43.093: INFO: Waiting up to 3m0s for PersistentVolume local-pvv57jb to have phase Bound Jun 18 00:14:43.095: INFO: PersistentVolume local-pvv57jb found and phase=Bound (2.372584ms) [It] should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 STEP: local-volume-type: dir Jun 18 00:14:43.110: INFO: Waiting up to 5m0s for pod "pod-81cfeb8f-e1fd-439b-89d0-52d99f1d486b" in namespace "persistent-local-volumes-test-6400" to be "Unschedulable" Jun 18 00:14:43.113: INFO: Pod "pod-81cfeb8f-e1fd-439b-89d0-52d99f1d486b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.646052ms Jun 18 00:14:45.118: INFO: Pod "pod-81cfeb8f-e1fd-439b-89d0-52d99f1d486b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007901342s Jun 18 00:14:45.118: INFO: Pod "pod-81cfeb8f-e1fd-439b-89d0-52d99f1d486b" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Jun 18 00:14:45.118: INFO: Deleting PersistentVolumeClaim "pvc-5tmmn" Jun 18 00:14:45.122: INFO: Deleting PersistentVolume "local-pvv57jb" STEP: Removing the test directory Jun 18 00:14:45.126: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d6d29350-a8f2-4809-86c3-7f48548d325c] Namespace:persistent-local-volumes-test-6400 PodName:hostexec-node2-l722d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:45.126: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:45.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6400" for this suite. • [SLOW TEST:8.279 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":14,"skipped":556,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:38.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 18 00:14:38.041: INFO: The status of Pod test-hostpath-type-5rgxr is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:40.044: INFO: The status of Pod test-hostpath-type-5rgxr is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:42.046: INFO: The status of Pod test-hostpath-type-5rgxr is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:48.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-567" for this suite. • [SLOW TEST:10.099 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory","total":-1,"completed":22,"skipped":798,"failed":0} SSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory","total":-1,"completed":14,"skipped":575,"failed":0} [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:29.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42 [It] should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 STEP: starting configmap-client STEP: Checking that text file contents are perfect. 
Jun 18 00:14:33.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-8840 exec configmap-client --namespace=volume-8840 -- cat /opt/0/firstfile' Jun 18 00:14:33.650: INFO: stderr: "" Jun 18 00:14:33.650: INFO: stdout: "this is the first file" Jun 18 00:14:33.650: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-8840 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:33.650: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:33.927: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-8840 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:33.927: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:34.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-8840 exec configmap-client --namespace=volume-8840 -- cat /opt/1/secondfile' Jun 18 00:14:34.260: INFO: stderr: "" Jun 18 00:14:34.260: INFO: stdout: "this is the second file" Jun 18 00:14:34.260: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/1] Namespace:volume-8840 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:34.260: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:34.409: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/1] Namespace:volume-8840 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:34.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod configmap-client in namespace volume-8840 Jun 18 00:14:34.508: INFO: Waiting for pod configmap-client to disappear Jun 18 00:14:34.510: INFO: Pod configmap-client still exists Jun 18 00:14:36.513: INFO: Waiting for pod configmap-client to disappear Jun 18 00:14:36.516: INFO: Pod configmap-client still exists Jun 18 00:14:38.511: INFO: Waiting for pod configmap-client to disappear Jun 18 00:14:38.514: INFO: Pod configmap-client still exists Jun 18 00:14:40.512: INFO: Waiting for pod configmap-client to disappear Jun 18 00:14:40.515: INFO: Pod configmap-client still exists Jun 18 00:14:42.511: INFO: Waiting for pod configmap-client to disappear Jun 18 00:14:42.514: INFO: Pod configmap-client still exists Jun 18 00:14:44.511: INFO: Waiting for pod configmap-client to disappear Jun 18 00:14:44.514: INFO: Pod configmap-client still exists Jun 18 00:14:46.515: INFO: Waiting for pod configmap-client to disappear Jun 18 00:14:46.518: INFO: Pod configmap-client still exists Jun 18 00:14:48.510: INFO: Waiting for pod configmap-client to disappear Jun 18 00:14:48.513: INFO: Pod configmap-client no longer exists [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:48.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-8840" for this suite. 
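The "should be mountable" check above reduces to reading each projected file back and confirming the mount points behave like directories rather than block devices. A hand-run equivalent of the commands the test issues against the configmap-client pod (pod, namespace, and paths taken from the log; combining the directory and block-device probes on one line is a simplification of the separate checks the framework runs):

NS=volume-8840
POD=configmap-client
KCFG=/root/.kube/config

# File contents must match the ConfigMap data exactly.
kubectl --kubeconfig=$KCFG -n $NS exec $POD -- cat /opt/0/firstfile    # expect: this is the first file
kubectl --kubeconfig=$KCFG -n $NS exec $POD -- cat /opt/1/secondfile   # expect: this is the second file

# Each mount point should be a directory (test -d) and not a block device (test -b).
kubectl --kubeconfig=$KCFG -n $NS exec $POD -- /bin/sh -c 'test -d /opt/0 && ! test -b /opt/0'
kubectl --kubeconfig=$KCFG -n $NS exec $POD -- /bin/sh -c 'test -d /opt/1 && ! test -b /opt/1'
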
• [SLOW TEST:19.359 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:45.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 18 00:14:45.279: INFO: The status of Pod test-hostpath-type-ncs2w is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:47.283: INFO: The status of Pod test-hostpath-type-ncs2w is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:49.284: INFO: The status of Pod test-hostpath-type-ncs2w is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:51.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-2693" for this suite. 
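The expected "HostPathType error event" in the socket spec above comes from a pod that mounts the pre-created socket path with a stricter hostPath type than what actually exists on disk. A sketch of the shape of such a pod, assuming a socket already exists at /mnt/test/asocket on node2; the pod and volume names here are illustrative, not the ones the suite generates:

# The kubelet's hostPath type check rejects the mount (the path is a
# socket, not a directory), which surfaces as a warning event on the pod.
kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-mismatch-sketch   # illustrative name
spec:
  nodeName: node2
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
    volumeMounts:
    - name: v
      mountPath: /mnt/asocket
  volumes:
  - name: v
    hostPath:
      path: /mnt/test/asocket
      type: Directory   # mismatch: HostPathDirectory vs. an existing socket
EOF
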
• [SLOW TEST:6.085 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory","total":-1,"completed":15,"skipped":561,"failed":0} SSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":15,"skipped":575,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:48.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Jun 18 00:14:52.567: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6505 PodName:hostexec-node2-5bcqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:52.567: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:52.668: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Jun 18 00:14:52.668: INFO: exec node2: stdout: "0\n" Jun 18 00:14:52.668: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Jun 18 00:14:52.668: INFO: exec node2: exit code: 0 Jun 18 00:14:52.668: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:52.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6505" for this suite. 
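The skip above is decided by a simple precondition probe: the suite counts entries under the GCE local-SSD by-uuid directory on the node and bails out when there are none. The equivalent check, as run on the node via the hostexec pod (note the exit code stays 0 even though ls fails, because wc -l is the last command in the pipeline; the decision is based on the "0" count):

# Require at least one scsi-fs local SSD; on this cluster the directory is
# missing, so the count is 0 and the spec is skipped.
ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
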
S [SKIPPING] in Spec Setup (BeforeEach) [4.154 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1254 ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:48.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Jun 18 00:14:48.168: INFO: The status of Pod test-hostpath-type-jhkbz is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:50.173: INFO: The status of Pod test-hostpath-type-jhkbz is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:52.172: INFO: The status of Pod test-hostpath-type-jhkbz is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Jun 18 00:14:52.174: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-2784 PodName:test-hostpath-type-jhkbz ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:14:52.174: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:54.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-2784" for this suite. 
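For the block-device variant above, the fixture first creates a device node inside the test pod with mknod and only then tries to mount it with a non-matching hostPath type. A minimal reproduction of that setup step, using the major/minor numbers from the log; the trailing test -b is an added sanity check, not something the suite runs:

# Create a block special file at the path the hostPath volume will reference.
mknod /mnt/test/ablkdev b 89 1

# Sanity check: the path should now be classified as a block device.
test -b /mnt/test/ablkdev && echo "ablkdev is a block device"
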
• [SLOW TEST:6.286 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket","total":-1,"completed":23,"skipped":808,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:26.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-577827c5-1d05-498d-9d03-2bbfb9b50ece" Jun 18 00:14:30.521: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-577827c5-1d05-498d-9d03-2bbfb9b50ece && dd if=/dev/zero of=/tmp/local-volume-test-577827c5-1d05-498d-9d03-2bbfb9b50ece/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-577827c5-1d05-498d-9d03-2bbfb9b50ece/file] Namespace:persistent-local-volumes-test-8443 PodName:hostexec-node1-4n24j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:30.521: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:30.672: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-577827c5-1d05-498d-9d03-2bbfb9b50ece/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8443 PodName:hostexec-node1-4n24j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:30.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:30.765: INFO: Creating a PV followed by a PVC Jun 18 00:14:30.772: INFO: Waiting for PV local-pv6b84v to bind to PVC pvc-k6lt6 Jun 18 00:14:30.772: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-k6lt6] to have phase Bound Jun 18 00:14:30.774: INFO: PersistentVolumeClaim pvc-k6lt6 found but phase is Pending instead of Bound. Jun 18 00:14:32.777: INFO: PersistentVolumeClaim pvc-k6lt6 found but phase is Pending instead of Bound. Jun 18 00:14:34.782: INFO: PersistentVolumeClaim pvc-k6lt6 found but phase is Pending instead of Bound. Jun 18 00:14:36.787: INFO: PersistentVolumeClaim pvc-k6lt6 found but phase is Pending instead of Bound. Jun 18 00:14:38.792: INFO: PersistentVolumeClaim pvc-k6lt6 found but phase is Pending instead of Bound. 
Jun 18 00:14:40.796: INFO: PersistentVolumeClaim pvc-k6lt6 found but phase is Pending instead of Bound. Jun 18 00:14:42.800: INFO: PersistentVolumeClaim pvc-k6lt6 found and phase=Bound (12.027574283s) Jun 18 00:14:42.800: INFO: Waiting up to 3m0s for PersistentVolume local-pv6b84v to have phase Bound Jun 18 00:14:42.802: INFO: PersistentVolume local-pv6b84v found and phase=Bound (2.031347ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 18 00:14:48.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8443 exec pod-79376cde-fdcc-463d-8a90-73d269998f25 --namespace=persistent-local-volumes-test-8443 -- stat -c %g /mnt/volume1' Jun 18 00:14:49.117: INFO: stderr: "" Jun 18 00:14:49.117: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 18 00:14:55.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8443 exec pod-a033d286-305b-4f29-a44a-600e2ed15ec7 --namespace=persistent-local-volumes-test-8443 -- stat -c %g /mnt/volume1' Jun 18 00:14:55.390: INFO: stderr: "" Jun 18 00:14:55.391: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-79376cde-fdcc-463d-8a90-73d269998f25 in namespace persistent-local-volumes-test-8443 STEP: Deleting second pod STEP: Deleting pod pod-a033d286-305b-4f29-a44a-600e2ed15ec7 in namespace persistent-local-volumes-test-8443 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:14:55.400: INFO: Deleting PersistentVolumeClaim "pvc-k6lt6" Jun 18 00:14:55.404: INFO: Deleting PersistentVolume "local-pv6b84v" Jun 18 00:14:55.408: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-577827c5-1d05-498d-9d03-2bbfb9b50ece/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8443 PodName:hostexec-node1-4n24j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:55.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-577827c5-1d05-498d-9d03-2bbfb9b50ece/file Jun 18 00:14:55.492: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8443 PodName:hostexec-node1-4n24j ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:55.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-577827c5-1d05-498d-9d03-2bbfb9b50ece Jun 18 00:14:55.600: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-577827c5-1d05-498d-9d03-2bbfb9b50ece] Namespace:persistent-local-volumes-test-8443 PodName:hostexec-node1-4n24j 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:55.601: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:55.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8443" for this suite. • [SLOW TEST:29.229 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":13,"skipped":428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:37.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:14:41.872: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-3b29e42e-920c-44d6-8cb1-a0b2164c8f87 && mount --bind /tmp/local-volume-test-3b29e42e-920c-44d6-8cb1-a0b2164c8f87 /tmp/local-volume-test-3b29e42e-920c-44d6-8cb1-a0b2164c8f87] Namespace:persistent-local-volumes-test-3529 PodName:hostexec-node1-m7bln ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:41.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:41.960: INFO: Creating a PV followed by a PVC Jun 18 00:14:41.972: INFO: Waiting for PV local-pvwsrlj to bind to PVC pvc-rbjhg Jun 18 00:14:41.972: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-rbjhg] to have phase Bound Jun 18 00:14:41.975: INFO: PersistentVolumeClaim pvc-rbjhg found but phase is Pending instead of Bound. Jun 18 00:14:43.978: INFO: PersistentVolumeClaim pvc-rbjhg found but phase is Pending instead of Bound. Jun 18 00:14:45.982: INFO: PersistentVolumeClaim pvc-rbjhg found but phase is Pending instead of Bound. Jun 18 00:14:47.985: INFO: PersistentVolumeClaim pvc-rbjhg found but phase is Pending instead of Bound. 
Jun 18 00:14:49.989: INFO: PersistentVolumeClaim pvc-rbjhg found but phase is Pending instead of Bound. Jun 18 00:14:51.993: INFO: PersistentVolumeClaim pvc-rbjhg found but phase is Pending instead of Bound. Jun 18 00:14:53.996: INFO: PersistentVolumeClaim pvc-rbjhg found but phase is Pending instead of Bound. Jun 18 00:14:56.000: INFO: PersistentVolumeClaim pvc-rbjhg found but phase is Pending instead of Bound. Jun 18 00:14:58.003: INFO: PersistentVolumeClaim pvc-rbjhg found and phase=Bound (16.030825012s) Jun 18 00:14:58.003: INFO: Waiting up to 3m0s for PersistentVolume local-pvwsrlj to have phase Bound Jun 18 00:14:58.005: INFO: PersistentVolume local-pvwsrlj found and phase=Bound (2.444879ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 18 00:14:58.010: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:14:58.011: INFO: Deleting PersistentVolumeClaim "pvc-rbjhg" Jun 18 00:14:58.016: INFO: Deleting PersistentVolume "local-pvwsrlj" STEP: Removing the test directory Jun 18 00:14:58.019: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-3b29e42e-920c-44d6-8cb1-a0b2164c8f87 && rm -r /tmp/local-volume-test-3b29e42e-920c-44d6-8cb1-a0b2164c8f87] Namespace:persistent-local-volumes-test-3529 PodName:hostexec-node1-m7bln ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:58.019: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:14:58.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3529" for this suite. 
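The dir-bindmounted volume type exercised above is nothing more than a directory bind-mounted onto itself on the node, so it shows up as its own mount point for the local PV. Setup and teardown are the two one-liners the hostexec pod runs, reproduced here with the generated path from the log (any scratch directory would behave the same way):

DIR=/tmp/local-volume-test-3b29e42e-920c-44d6-8cb1-a0b2164c8f87

# Setup: create the directory and bind-mount it onto itself.
mkdir "$DIR" && mount --bind "$DIR" "$DIR"

# Teardown: unmount, then remove the directory.
umount "$DIR" && rm -r "$DIR"
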
S [SKIPPING] [20.337 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:51.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 STEP: Creating a pod to test hostPath mode Jun 18 00:14:51.395: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9839" to be "Succeeded or Failed" Jun 18 00:14:51.398: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904642ms Jun 18 00:14:53.402: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006603303s Jun 18 00:14:55.406: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010519796s Jun 18 00:14:57.410: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014518985s STEP: Saw pod success
Jun 18 00:14:57.410: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun 18 00:14:57.412: INFO: Trying to get logs from node node2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jun 18 00:14:57.424: INFO: Waiting for pod pod-host-path-test to disappear
Jun 18 00:14:57.426: INFO: Pod pod-host-path-test no longer exists
Jun 18 00:14:57.426: FAIL: Unexpected error:
    <*errors.errorString | 0xc000c112d0>: {
        s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": 61267\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx",
    }
    expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected
        : mount type of "/test-volume": 61267
        mode of file "/test-volume": dgtrwxrwxrwx
    to contain substring
        : mode of file "/test-volume": dtrwxrwx
    occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0012a4000, 0x6ef5302, 0xd, 0xc001e7e800, 0x0, 0xc0006751c0, 0x1, 0x1, 0x70fd948)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:564
k8s.io/kubernetes/test/e2e/common/storage.glob..func5.2()
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:59 +0x299
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000eff680)
        _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000eff680)
        _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000eff680, 0x70f99e8)
        /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
        /usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "hostpath-9839".
STEP: Found 9 events.
Jun 18 00:14:57.431: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-host-path-test: { } Scheduled: Successfully assigned hostpath-9839/pod-host-path-test to node2 Jun 18 00:14:57.431: INFO: At 2022-06-18 00:14:53 +0000 UTC - event for pod-host-path-test: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 18 00:14:57.431: INFO: At 2022-06-18 00:14:53 +0000 UTC - event for pod-host-path-test: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 288.21634ms Jun 18 00:14:57.431: INFO: At 2022-06-18 00:14:53 +0000 UTC - event for pod-host-path-test: {kubelet node2} Created: Created container test-container-1 Jun 18 00:14:57.431: INFO: At 2022-06-18 00:14:54 +0000 UTC - event for pod-host-path-test: {kubelet node2} Started: Started container test-container-1 Jun 18 00:14:57.431: INFO: At 2022-06-18 00:14:54 +0000 UTC - event for pod-host-path-test: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" Jun 18 00:14:57.431: INFO: At 2022-06-18 00:14:54 +0000 UTC - event for pod-host-path-test: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 492.668063ms Jun 18 00:14:57.431: INFO: At 2022-06-18 00:14:54 +0000 UTC - event for pod-host-path-test: {kubelet node2} Created: Created container test-container-2 Jun 18 00:14:57.431: INFO: At 2022-06-18 00:14:54 +0000 UTC - event for pod-host-path-test: {kubelet node2} Started: Started container test-container-2 Jun 18 00:14:57.433: INFO: POD NODE PHASE GRACE CONDITIONS Jun 18 00:14:57.433: INFO: Jun 18 00:14:57.438: INFO: Logging node info for node master1 Jun 18 00:14:57.440: INFO: Node Info: &Node{ObjectMeta:{master1 47691bb2-4ee9-4386-8bec-0f9db1917afd 105470 0 2022-06-17 19:59:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:49 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:49 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:49 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-18 00:14:49 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 18 00:14:57.441: INFO: Logging kubelet events for node master1 Jun 18 00:14:57.443: INFO: Logging pods the kubelet thinks is on node master1 Jun 18 00:14:57.476: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.477: INFO: Container kube-proxy ready: true, restart count 2 Jun 18 00:14:57.477: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded) Jun 18 00:14:57.477: INFO: Container docker-registry ready: true, restart count 0 Jun 18 00:14:57.477: INFO: Container nginx ready: true, restart count 0 Jun 18 00:14:57.477: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 18 00:14:57.477: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 18 00:14:57.477: INFO: Container node-exporter ready: true, restart count 0 Jun 18 00:14:57.477: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.477: INFO: Container kube-scheduler ready: true, restart count 0 Jun 18 00:14:57.477: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.477: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 18 00:14:57.477: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 18 00:14:57.477: INFO: Init container install-cni ready: true, restart count 2 Jun 18 00:14:57.477: INFO: Container kube-flannel ready: true, restart count 2 Jun 18 00:14:57.477: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.477: INFO: Container kube-multus ready: true, restart count 1 Jun 18 00:14:57.477: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.477: INFO: Container kube-apiserver ready: true, restart count 0 Jun 18 00:14:57.557: INFO: Latency metrics for node master1 Jun 18 00:14:57.557: INFO: Logging node info for node master2 Jun 18 00:14:57.560: INFO: Node Info: &Node{ObjectMeta:{master2 71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 105647 0 2022-06-17 19:59:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:56 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:56 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:56 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-18 00:14:56 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 18 00:14:57.561: INFO: Logging kubelet events for node master2 Jun 18 00:14:57.564: INFO: Logging pods the kubelet thinks is on node master2 Jun 18 00:14:57.581: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.581: INFO: Container kube-apiserver ready: true, restart count 0 Jun 18 00:14:57.581: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.581: INFO: Container kube-proxy ready: true, restart count 1 Jun 18 00:14:57.581: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.581: INFO: Container kube-multus ready: true, restart count 1 Jun 18 00:14:57.581: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.581: INFO: Container coredns ready: true, restart count 1 Jun 18 00:14:57.581: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.581: INFO: Container autoscaler ready: true, restart count 1 Jun 18 00:14:57.581: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.581: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 18 00:14:57.581: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.581: INFO: Container kube-scheduler ready: true, restart count 2 Jun 18 00:14:57.581: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 18 00:14:57.581: INFO: Init container install-cni ready: true, restart count 2 Jun 18 00:14:57.581: INFO: Container kube-flannel ready: true, restart count 2 Jun 18 00:14:57.581: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.581: INFO: Container nfd-controller ready: true, restart count 0 Jun 18 00:14:57.581: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 18 00:14:57.581: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 18 00:14:57.581: INFO: Container node-exporter ready: true, restart count 0 Jun 18 00:14:57.665: INFO: Latency metrics for node master2 Jun 18 00:14:57.665: INFO: Logging node info for node master3 Jun 18 00:14:57.668: INFO: Node Info: &Node{ObjectMeta:{master3 4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 105629 0 2022-06-17 19:59:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 
kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:55 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:55 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:55 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-18 00:14:55 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 
quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 18 00:14:57.668: INFO: Logging kubelet events for node master3 Jun 18 00:14:57.671: INFO: Logging pods the kubelet thinks is on node master3 Jun 18 00:14:57.687: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.687: INFO: Container kube-controller-manager ready: true, restart count 2 Jun 18 00:14:57.687: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.687: INFO: Container coredns ready: true, restart count 1 Jun 18 00:14:57.687: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded) Jun 18 00:14:57.687: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 18 00:14:57.687: INFO: Container prometheus-operator ready: true, restart count 0 Jun 18 00:14:57.687: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.687: INFO: Container kube-multus ready: true, restart count 1 Jun 18 00:14:57.687: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 18 00:14:57.687: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 18 00:14:57.687: INFO: Container node-exporter ready: true, restart count 0 Jun 18 00:14:57.687: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.687: INFO: Container kube-apiserver ready: true, restart count 0 Jun 18 00:14:57.687: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.687: INFO: Container kube-scheduler ready: true, restart count 2 Jun 18 00:14:57.687: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.687: INFO: Container kube-proxy ready: true, restart count 1 Jun 18 00:14:57.687: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 18 00:14:57.687: INFO: Init container install-cni ready: true, restart count 0 Jun 18 00:14:57.687: INFO: Container kube-flannel ready: true, restart count 2 Jun 18 00:14:57.778: INFO: Latency metrics for node master3 Jun 18 
00:14:57.778: INFO: Logging node info for node node1 Jun 18 00:14:57.780: INFO: Node Info: &Node{ObjectMeta:{node1 2db3a28c-448f-4511-9db8-4ef739b681b1 105516 0 2022-06-17 20:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-ver
sion.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-06-17 23:59:46 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2022-06-18 00:12:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2022-06-18 00:13:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:51 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:51 +0000 UTC,LastTransitionTime:2022-06-17 
20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:51 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-18 00:14:51 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 18 00:14:57.781: INFO: Logging kubelet events for node node1 Jun 18 00:14:57.783: INFO: Logging pods the kubelet thinks is on node node1 Jun 18 00:14:57.809: INFO: hostexec-node1-m7bln started at 2022-06-18 00:14:37 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container agnhost-container ready: true, restart count 0 Jun 18 00:14:57.809: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container cmk-webhook ready: true, restart count 0 Jun 18 00:14:57.809: INFO: hostexec-node1-6l7bl started at 2022-06-18 00:14:55 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container agnhost-container ready: false, restart count 0 Jun 18 00:14:57.809: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container kube-proxy ready: true, restart count 2 Jun 18 00:14:57.809: 
INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded) Jun 18 00:14:57.809: INFO: Container nodereport ready: true, restart count 0 Jun 18 00:14:57.809: INFO: Container reconcile ready: true, restart count 0 Jun 18 00:14:57.809: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container nginx-proxy ready: true, restart count 2 Jun 18 00:14:57.809: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container kube-multus ready: true, restart count 1 Jun 18 00:14:57.809: INFO: hostexec-node1-4n24j started at 2022-06-18 00:14:26 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container agnhost-container ready: true, restart count 0 Jun 18 00:14:57.809: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 18 00:14:57.809: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container tas-extender ready: true, restart count 0 Jun 18 00:14:57.809: INFO: pod-79376cde-fdcc-463d-8a90-73d269998f25 started at 2022-06-18 00:14:42 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container write-pod ready: true, restart count 0 Jun 18 00:14:57.809: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container nfd-worker ready: true, restart count 0 Jun 18 00:14:57.809: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded) Jun 18 00:14:57.809: INFO: Container config-reloader ready: true, restart count 0 Jun 18 00:14:57.809: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 18 00:14:57.809: INFO: Container grafana ready: true, restart count 0 Jun 18 00:14:57.809: INFO: Container prometheus ready: true, restart count 1 Jun 18 00:14:57.809: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded) Jun 18 00:14:57.809: INFO: Container collectd ready: true, restart count 0 Jun 18 00:14:57.809: INFO: Container collectd-exporter ready: true, restart count 0 Jun 18 00:14:57.809: INFO: Container rbac-proxy ready: true, restart count 0 Jun 18 00:14:57.809: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Init container install-cni ready: true, restart count 2 Jun 18 00:14:57.809: INFO: Container kube-flannel ready: true, restart count 2 Jun 18 00:14:57.809: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 18 00:14:57.809: INFO: pod-a033d286-305b-4f29-a44a-600e2ed15ec7 started at 2022-06-18 00:14:49 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:57.809: INFO: Container write-pod ready: true, restart count 0 Jun 18 00:14:57.809: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded) Jun 18 00:14:57.809: INFO: Container discover ready: false, restart count 0 Jun 18 00:14:57.809: INFO: Container init ready: false, restart count 0 
Jun 18 00:14:57.809: INFO: Container install ready: false, restart count 0 Jun 18 00:14:57.809: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded) Jun 18 00:14:57.809: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 18 00:14:57.809: INFO: Container node-exporter ready: true, restart count 0 Jun 18 00:14:58.853: INFO: Latency metrics for node node1 Jun 18 00:14:58.853: INFO: Logging node info for node node2 Jun 18 00:14:58.856: INFO: Node Info: &Node{ObjectMeta:{node2 467d2582-10f7-475b-9f20-5b7c2e46267a 105606 0 2022-06-17 20:00:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-06-17 20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-ver
sion.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-06-17 23:59:46 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2022-06-18 00:13:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2022-06-18 00:13:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:55 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:55 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-18 00:14:55 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-18 00:14:55 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:373281573,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jun 18 00:14:58.857: INFO: Logging kubelet events for node node2 Jun 18 00:14:58.859: INFO: Logging pods the kubelet thinks is on node node2 Jun 18 00:14:58.873: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded) Jun 18 00:14:58.873: INFO: Container discover ready: false, restart count 0 Jun 18 00:14:58.873: INFO: Container init ready: false, restart count 0 Jun 18 00:14:58.873: INFO: Container install ready: false, restart count 0 Jun 18 00:14:58.873: INFO: hostexec-node2-dnxqp started at 2022-06-18 00:14:58 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:58.873: INFO: Container agnhost-container ready: false, restart count 0 Jun 18 00:14:58.873: INFO: test-hostpath-type-f4mwn started at 2022-06-18 00:14:54 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:58.873: INFO: Container host-path-testing ready: true, restart count 0 Jun 18 00:14:58.873: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded) Jun 18 00:14:58.873: INFO: Container nfd-worker ready: true, restart count 0 Jun 18 00:14:58.873: INFO: kube-proxy-pvtj6 started at 
2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container kube-proxy ready: true, restart count 2
Jun 18 00:14:58.873: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container kube-multus ready: true, restart count 1
Jun 18 00:14:58.873: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container nodereport ready: true, restart count 0
Jun 18 00:14:58.873: INFO: Container reconcile ready: true, restart count 0
Jun 18 00:14:58.873: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container collectd ready: true, restart count 0
Jun 18 00:14:58.873: INFO: Container collectd-exporter ready: true, restart count 0
Jun 18 00:14:58.873: INFO: Container rbac-proxy ready: true, restart count 0
Jun 18 00:14:58.873: INFO: test-hostpath-type-jhkbz started at 2022-06-18 00:14:48 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container host-path-testing ready: true, restart count 0
Jun 18 00:14:58.873: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container nginx-proxy ready: true, restart count 2
Jun 18 00:14:58.873: INFO: pod-configmaps-10f858bb-756d-4dc3-b2ff-0e59a587fc20 started at 2022-06-18 00:11:50 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container agnhost-container ready: false, restart count 0
Jun 18 00:14:58.873: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 18 00:14:58.873: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Init container install-cni ready: true, restart count 2
Jun 18 00:14:58.873: INFO: Container kube-flannel ready: true, restart count 2
Jun 18 00:14:58.873: INFO: pod-configmaps-d14c2478-4706-4343-9a5d-8c7c81717492 started at 2022-06-18 00:13:33 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container agnhost-container ready: false, restart count 0
Jun 18 00:14:58.873: INFO: pod-secrets-f3648cc6-341e-494a-93b9-058e388c6bf7 started at 2022-06-18 00:13:31 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container creates-volume-test ready: false, restart count 0
Jun 18 00:14:58.873: INFO: hostexec-node2-vscm5 started at 2022-06-18 00:14:52 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container agnhost-container ready: true, restart count 0
Jun 18 00:14:58.873: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 18 00:14:58.873: INFO: test-hostpath-type-ncs2w started at 2022-06-18 00:14:45 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.873: INFO: Container host-path-sh-testing ready: true, restart count 0
Jun 18 00:14:58.874: INFO: pod-secrets-578ad629-13f0-432e-9653-3ad13fe494da started at 2022-06-18 00:14:24 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.874: INFO: Container creates-volume-test ready: false, restart count 0
Jun 18 00:14:58.874: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 18 00:14:58.874: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 18 00:14:58.874: INFO: Container node-exporter ready: true, restart count 0
Jun 18 00:14:58.874: INFO: test-hostpath-type-5lf9l started at 2022-06-18 00:14:58 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.874: INFO: Container host-path-testing ready: false, restart count 0
Jun 18 00:14:58.874: INFO: test-hostpath-type-5rgxr started at 2022-06-18 00:14:38 +0000 UTC (0+1 container statuses recorded)
Jun 18 00:14:58.874: INFO: Container host-path-testing ready: false, restart count 0
Jun 18 00:14:59.212: INFO: Latency metrics for node node2
Jun 18 00:14:59.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9839" for this suite.
• Failure [7.863 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
  Jun 18 00:14:57.426: Unexpected error:
      <*errors.errorString | 0xc000c112d0>: {
          s: "expected \"mode of file \\\"/test-volume\\\": dtrwxrwx\" in container output: Expected\n : mount type of \"/test-volume\": 61267\n mode of file \"/test-volume\": dgtrwxrwxrwx\n \nto contain substring\n : mode of file \"/test-volume\": dtrwxrwx",
      }
      expected "mode of file \"/test-volume\": dtrwxrwx" in container output: Expected
       : mount type of "/test-volume": 61267
       mode of file "/test-volume": dgtrwxrwxrwx
      to contain substring
       : mode of file "/test-volume": dtrwxrwx
  occurred
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":15,"skipped":574,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 18 00:13:52.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call NodeUnstage after NodeStage ephemeral error
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828
STEP: Building a driver namespace object, basename csi-mock-volumes-6743
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock proxy
Jun 18 00:13:52.717: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-attacher
Jun 18 00:13:52.721: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6743
Jun 18 00:13:52.721: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6743
Jun 18 00:13:52.723: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6743
Jun 18 00:13:52.725: INFO: creating *v1.Role: csi-mock-volumes-6743-2301/external-attacher-cfg-csi-mock-volumes-6743
Jun 18 00:13:52.729: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-6743-2301/csi-attacher-role-cfg Jun 18 00:13:52.731: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-provisioner Jun 18 00:13:52.733: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6743 Jun 18 00:13:52.733: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6743 Jun 18 00:13:52.736: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6743 Jun 18 00:13:52.740: INFO: creating *v1.Role: csi-mock-volumes-6743-2301/external-provisioner-cfg-csi-mock-volumes-6743 Jun 18 00:13:52.742: INFO: creating *v1.RoleBinding: csi-mock-volumes-6743-2301/csi-provisioner-role-cfg Jun 18 00:13:52.745: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-resizer Jun 18 00:13:52.747: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6743 Jun 18 00:13:52.748: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6743 Jun 18 00:13:52.750: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6743 Jun 18 00:13:52.754: INFO: creating *v1.Role: csi-mock-volumes-6743-2301/external-resizer-cfg-csi-mock-volumes-6743 Jun 18 00:13:52.757: INFO: creating *v1.RoleBinding: csi-mock-volumes-6743-2301/csi-resizer-role-cfg Jun 18 00:13:52.759: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-snapshotter Jun 18 00:13:52.762: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6743 Jun 18 00:13:52.762: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6743 Jun 18 00:13:52.764: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6743 Jun 18 00:13:52.767: INFO: creating *v1.Role: csi-mock-volumes-6743-2301/external-snapshotter-leaderelection-csi-mock-volumes-6743 Jun 18 00:13:52.770: INFO: creating *v1.RoleBinding: csi-mock-volumes-6743-2301/external-snapshotter-leaderelection Jun 18 00:13:52.772: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-mock Jun 18 00:13:52.774: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6743 Jun 18 00:13:52.777: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6743 Jun 18 00:13:52.779: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6743 Jun 18 00:13:52.782: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6743 Jun 18 00:13:52.784: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6743 Jun 18 00:13:52.787: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6743 Jun 18 00:13:52.790: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6743 Jun 18 00:13:52.792: INFO: creating *v1.StatefulSet: csi-mock-volumes-6743-2301/csi-mockplugin Jun 18 00:13:52.798: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6743 Jun 18 00:13:52.800: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6743" Jun 18 00:13:52.802: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6743 to register on node node1 I0618 00:13:58.884205 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6743","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:13:58.992939 38 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0618 00:13:58.996328 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6743","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:13:58.997864 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0618 00:13:59.000647 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0618 00:13:59.240850 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-6743"},"Error":"","FullError":null} STEP: Creating pod Jun 18 00:14:02.318: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:14:02.323: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-t6zz5] to have phase Bound Jun 18 00:14:02.325: INFO: PersistentVolumeClaim pvc-t6zz5 found but phase is Pending instead of Bound. I0618 00:14:02.331775 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-bccfb940-c7b2-478c-84cc-01656bff93e7","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-bccfb940-c7b2-478c-84cc-01656bff93e7"}}},"Error":"","FullError":null} Jun 18 00:14:04.328: INFO: PersistentVolumeClaim pvc-t6zz5 found and phase=Bound (2.005215997s) Jun 18 00:14:04.343: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-t6zz5] to have phase Bound Jun 18 00:14:04.345: INFO: PersistentVolumeClaim pvc-t6zz5 found and phase=Bound (2.144277ms) STEP: Waiting for expected CSI calls I0618 00:14:04.971190 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:14:04.973905 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bccfb940-c7b2-478c-84cc-01656bff93e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bccfb940-c7b2-478c-84cc-01656bff93e7","storage.kubernetes.io/csiProvisionerIdentity":"1655511239004-8081-csi-mock-csi-mock-volumes-6743"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} STEP: Deleting the previously created pod Jun 18 00:14:05.346: INFO: Deleting pod "pvc-volume-tester-zc52w" in namespace "csi-mock-volumes-6743" Jun 18 00:14:05.350: INFO: Wait up to 5m0s for pod 
"pvc-volume-tester-zc52w" to be fully deleted I0618 00:14:05.574690 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:14:05.576516 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bccfb940-c7b2-478c-84cc-01656bff93e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bccfb940-c7b2-478c-84cc-01656bff93e7","storage.kubernetes.io/csiProvisionerIdentity":"1655511239004-8081-csi-mock-csi-mock-volumes-6743"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I0618 00:14:06.588074 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:14:06.589875 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bccfb940-c7b2-478c-84cc-01656bff93e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bccfb940-c7b2-478c-84cc-01656bff93e7","storage.kubernetes.io/csiProvisionerIdentity":"1655511239004-8081-csi-mock-csi-mock-volumes-6743"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I0618 00:14:08.682849 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:14:08.690843 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bccfb940-c7b2-478c-84cc-01656bff93e7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bccfb940-c7b2-478c-84cc-01656bff93e7","storage.kubernetes.io/csiProvisionerIdentity":"1655511239004-8081-csi-mock-csi-mock-volumes-6743"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I0618 00:14:11.060762 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:14:11.062393 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bccfb940-c7b2-478c-84cc-01656bff93e7/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-zc52w Jun 18 00:14:12.356: INFO: Deleting pod "pvc-volume-tester-zc52w" in namespace "csi-mock-volumes-6743" STEP: Deleting claim pvc-t6zz5 Jun 18 00:14:12.366: INFO: Waiting up to 2m0s for 
PersistentVolume pvc-bccfb940-c7b2-478c-84cc-01656bff93e7 to get deleted Jun 18 00:14:12.368: INFO: PersistentVolume pvc-bccfb940-c7b2-478c-84cc-01656bff93e7 found and phase=Bound (2.305555ms) I0618 00:14:12.378047 38 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 18 00:14:14.371: INFO: PersistentVolume pvc-bccfb940-c7b2-478c-84cc-01656bff93e7 was removed STEP: Deleting storageclass csi-mock-volumes-6743-scplskt STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6743 STEP: Waiting for namespaces [csi-mock-volumes-6743] to vanish STEP: uninstalling csi mock driver Jun 18 00:14:21.402: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-attacher Jun 18 00:14:21.408: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6743 Jun 18 00:14:21.412: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6743 Jun 18 00:14:21.416: INFO: deleting *v1.Role: csi-mock-volumes-6743-2301/external-attacher-cfg-csi-mock-volumes-6743 Jun 18 00:14:21.420: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6743-2301/csi-attacher-role-cfg Jun 18 00:14:21.423: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-provisioner Jun 18 00:14:21.426: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6743 Jun 18 00:14:21.430: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6743 Jun 18 00:14:21.433: INFO: deleting *v1.Role: csi-mock-volumes-6743-2301/external-provisioner-cfg-csi-mock-volumes-6743 Jun 18 00:14:21.436: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6743-2301/csi-provisioner-role-cfg Jun 18 00:14:21.439: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-resizer Jun 18 00:14:21.443: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6743 Jun 18 00:14:21.446: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6743 Jun 18 00:14:21.449: INFO: deleting *v1.Role: csi-mock-volumes-6743-2301/external-resizer-cfg-csi-mock-volumes-6743 Jun 18 00:14:21.452: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6743-2301/csi-resizer-role-cfg Jun 18 00:14:21.455: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-snapshotter Jun 18 00:14:21.458: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6743 Jun 18 00:14:21.461: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6743 Jun 18 00:14:21.465: INFO: deleting *v1.Role: csi-mock-volumes-6743-2301/external-snapshotter-leaderelection-csi-mock-volumes-6743 Jun 18 00:14:21.470: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6743-2301/external-snapshotter-leaderelection Jun 18 00:14:21.474: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6743-2301/csi-mock Jun 18 00:14:21.478: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6743 Jun 18 00:14:21.481: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6743 Jun 18 00:14:21.485: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6743 Jun 18 00:14:21.488: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6743 Jun 18 00:14:21.492: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6743 Jun 18 00:14:21.495: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-snapshotter-role-csi-mock-volumes-6743 Jun 18 00:14:21.498: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6743 Jun 18 00:14:21.502: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6743-2301/csi-mockplugin Jun 18 00:14:21.506: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6743 STEP: deleting the driver namespace: csi-mock-volumes-6743-2301 STEP: Waiting for namespaces [csi-mock-volumes-6743-2301] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:05.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:72.870 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should call NodeUnstage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error","total":-1,"completed":9,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:55.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-45503cd3-aae1-4073-a5e5-a7aef870a424" Jun 18 00:14:59.812: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-45503cd3-aae1-4073-a5e5-a7aef870a424 && dd if=/dev/zero of=/tmp/local-volume-test-45503cd3-aae1-4073-a5e5-a7aef870a424/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-45503cd3-aae1-4073-a5e5-a7aef870a424/file] Namespace:persistent-local-volumes-test-6048 PodName:hostexec-node1-6l7bl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:59.812: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:14:59.930: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-45503cd3-aae1-4073-a5e5-a7aef870a424/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6048 PodName:hostexec-node1-6l7bl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:59.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and 
PVs Jun 18 00:15:00.021: INFO: Creating a PV followed by a PVC Jun 18 00:15:00.028: INFO: Waiting for PV local-pv47wb9 to bind to PVC pvc-8ndhq Jun 18 00:15:00.028: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8ndhq] to have phase Bound Jun 18 00:15:00.030: INFO: PersistentVolumeClaim pvc-8ndhq found but phase is Pending instead of Bound. Jun 18 00:15:02.036: INFO: PersistentVolumeClaim pvc-8ndhq found and phase=Bound (2.007922359s) Jun 18 00:15:02.036: INFO: Waiting up to 3m0s for PersistentVolume local-pv47wb9 to have phase Bound Jun 18 00:15:02.038: INFO: PersistentVolume local-pv47wb9 found and phase=Bound (2.212759ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Jun 18 00:15:06.066: INFO: pod "pod-2f3063e0-c2c1-4225-a5f5-a360b2496814" created on Node "node1" STEP: Writing in pod1 Jun 18 00:15:06.067: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6048 PodName:pod-2f3063e0-c2c1-4225-a5f5-a360b2496814 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:15:06.067: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:15:06.153: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Jun 18 00:15:06.153: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6048 PodName:pod-2f3063e0-c2c1-4225-a5f5-a360b2496814 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jun 18 00:15:06.153: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:15:06.463: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2f3063e0-c2c1-4225-a5f5-a360b2496814 in namespace persistent-local-volumes-test-6048 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:15:06.467: INFO: Deleting PersistentVolumeClaim "pvc-8ndhq" Jun 18 00:15:06.471: INFO: Deleting PersistentVolume "local-pv47wb9" Jun 18 00:15:06.475: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-45503cd3-aae1-4073-a5e5-a7aef870a424/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6048 PodName:hostexec-node1-6l7bl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:06.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node1" at path /tmp/local-volume-test-45503cd3-aae1-4073-a5e5-a7aef870a424/file Jun 18 00:15:06.598: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-6048 PodName:hostexec-node1-6l7bl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:06.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-45503cd3-aae1-4073-a5e5-a7aef870a424 Jun 18 00:15:06.706: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-45503cd3-aae1-4073-a5e5-a7aef870a424] Namespace:persistent-local-volumes-test-6048 PodName:hostexec-node1-6l7bl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:06.706: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:06.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6048" for this suite. • [SLOW TEST:11.054 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":14,"skipped":451,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:54.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Jun 18 00:14:54.489: INFO: The status of Pod test-hostpath-type-f4mwn is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:56.494: INFO: The status of Pod test-hostpath-type-f4mwn is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:14:58.493: INFO: The status of Pod test-hostpath-type-f4mwn is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 [AfterEach] [sig-storage] HostPathType File [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:08.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-4638" for this suite. • [SLOW TEST:14.096 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile","total":-1,"completed":24,"skipped":822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:15:05.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Jun 18 00:15:05.689: INFO: The status of Pod test-hostpath-type-n2l56 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:15:07.693: INFO: The status of Pod test-hostpath-type-n2l56 is Pending, waiting for it to be Running (with Ready = true) Jun 18 00:15:09.692: INFO: The status of Pod test-hostpath-type-n2l56 is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:11.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-4892" for this suite. 
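The HostPathType Socket spec above passes precisely because the kubelet refuses to mount a hostPath of type Socket when nothing exists at the given path. A rough way to reproduce that check by hand is sketched below; the namespace, pod name and path are made up for illustration, and the busybox image is one already cached on these nodes.

kubectl create namespace hostpath-type-demo

cat <<'EOF' | kubectl apply -n hostpath-type-demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-socket-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "3600"]
    volumeMounts:
    - name: sock
      mountPath: /mnt/sock
  volumes:
  - name: sock
    hostPath:
      path: /tmp/does-not-exist-socket
      type: Socket        # kubelet requires an existing UNIX socket at this path
EOF

# The pod should stay Pending and an event should report that the hostPath type
# check failed, which is what the spec's "Checking for HostPathType error event"
# step looks for.
kubectl -n hostpath-type-demo get events --field-selector involvedObject.name=hostpath-socket-demo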
• [SLOW TEST:6.075 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket","total":-1,"completed":10,"skipped":320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:15:11.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-limits-on-node STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35 Jun 18 00:15:11.876: INFO: Only supported for providers [aws gce gke] (not local) [AfterEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:11.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-limits-on-node-2414" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should verify that all nodes have volume limits [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41 Only supported for providers [aws gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:36 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:15:08.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 18 00:15:08.689: INFO: Waiting up to 5m0s for pod "pod-e9d294c1-8a73-47bc-90bf-f9d5aef1a78b" in namespace "emptydir-8889" to be "Succeeded or Failed" Jun 18 00:15:08.691: INFO: Pod "pod-e9d294c1-8a73-47bc-90bf-f9d5aef1a78b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.158105ms Jun 18 00:15:10.694: INFO: Pod "pod-e9d294c1-8a73-47bc-90bf-f9d5aef1a78b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005672656s Jun 18 00:15:12.699: INFO: Pod "pod-e9d294c1-8a73-47bc-90bf-f9d5aef1a78b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010038673s STEP: Saw pod success Jun 18 00:15:12.699: INFO: Pod "pod-e9d294c1-8a73-47bc-90bf-f9d5aef1a78b" satisfied condition "Succeeded or Failed" Jun 18 00:15:12.701: INFO: Trying to get logs from node node2 pod pod-e9d294c1-8a73-47bc-90bf-f9d5aef1a78b container test-container: STEP: delete the pod Jun 18 00:15:12.736: INFO: Waiting for pod pod-e9d294c1-8a73-47bc-90bf-f9d5aef1a78b to disappear Jun 18 00:15:12.738: INFO: Pod pod-e9d294c1-8a73-47bc-90bf-f9d5aef1a78b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:12.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8889" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":25,"skipped":873,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:58.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:15:02.235: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b0b6b97a-cae1-4d32-b202-81ca2674190d-backend && mount --bind /tmp/local-volume-test-b0b6b97a-cae1-4d32-b202-81ca2674190d-backend /tmp/local-volume-test-b0b6b97a-cae1-4d32-b202-81ca2674190d-backend && ln -s /tmp/local-volume-test-b0b6b97a-cae1-4d32-b202-81ca2674190d-backend /tmp/local-volume-test-b0b6b97a-cae1-4d32-b202-81ca2674190d] Namespace:persistent-local-volumes-test-9795 PodName:hostexec-node2-dnxqp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:02.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:15:02.598: INFO: Creating a PV followed by a PVC Jun 18 00:15:02.605: INFO: Waiting for PV local-pvcnwxd to bind to PVC pvc-bptxj Jun 18 00:15:02.605: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-bptxj] to have phase Bound Jun 18 00:15:02.607: INFO: PersistentVolumeClaim pvc-bptxj found but phase is Pending instead of Bound. 
Jun 18 00:15:04.611: INFO: PersistentVolumeClaim pvc-bptxj found and phase=Bound (2.006358488s) Jun 18 00:15:04.611: INFO: Waiting up to 3m0s for PersistentVolume local-pvcnwxd to have phase Bound Jun 18 00:15:04.613: INFO: PersistentVolume local-pvcnwxd found and phase=Bound (1.742823ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 18 00:15:10.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9795 exec pod-5f9d3023-0e20-4f7b-8ae1-e427fd6c862c --namespace=persistent-local-volumes-test-9795 -- stat -c %g /mnt/volume1' Jun 18 00:15:11.009: INFO: stderr: "" Jun 18 00:15:11.009: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 18 00:15:15.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9795 exec pod-6c76871f-5d70-439a-beb8-2e6726ef3610 --namespace=persistent-local-volumes-test-9795 -- stat -c %g /mnt/volume1' Jun 18 00:15:15.310: INFO: stderr: "" Jun 18 00:15:15.310: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-5f9d3023-0e20-4f7b-8ae1-e427fd6c862c in namespace persistent-local-volumes-test-9795 STEP: Deleting second pod STEP: Deleting pod pod-6c76871f-5d70-439a-beb8-2e6726ef3610 in namespace persistent-local-volumes-test-9795 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:15:15.322: INFO: Deleting PersistentVolumeClaim "pvc-bptxj" Jun 18 00:15:15.325: INFO: Deleting PersistentVolume "local-pvcnwxd" STEP: Removing the test directory Jun 18 00:15:15.330: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-b0b6b97a-cae1-4d32-b202-81ca2674190d && umount /tmp/local-volume-test-b0b6b97a-cae1-4d32-b202-81ca2674190d-backend && rm -r /tmp/local-volume-test-b0b6b97a-cae1-4d32-b202-81ca2674190d-backend] Namespace:persistent-local-volumes-test-9795 PodName:hostexec-node2-dnxqp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:15.330: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:15.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9795" for this suite. 
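Stripped of the ExecWithOptions/nsenter plumbing, the host-side commands behind the "dir-link-bindmounted" volume type above reduce to the short sequence below (a sketch, assuming root on the node; the suite uses a random /tmp/local-volume-test-<uuid> path, the name here is illustrative).

VOL=/tmp/local-volume-demo

mkdir "${VOL}-backend"                              # backing directory
mount --bind "${VOL}-backend" "${VOL}-backend"      # self bind-mount so it registers as a mount point
ln -s "${VOL}-backend" "${VOL}"                     # the local PV path points at this symlink

# The two test pods then verify the volume's group matches the pod fsGroup
# (1234 in the run above), e.g.:
#   kubectl exec <pod> -- stat -c %g /mnt/volume1

# Teardown mirrors the AfterEach steps logged above:
rm "${VOL}"
umount "${VOL}-backend"
rm -r "${VOL}-backend"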
• [SLOW TEST:17.631 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":16,"skipped":542,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:15:12.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f" Jun 18 00:15:20.813: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f && dd if=/dev/zero of=/tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f/file] Namespace:persistent-local-volumes-test-8672 PodName:hostexec-node2-qbtgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:20.813: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:15:20.930: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8672 PodName:hostexec-node2-qbtgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:20.930: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:15:21.086: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f && chmod o+rwx /tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f] Namespace:persistent-local-volumes-test-8672 
PodName:hostexec-node2-qbtgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:21.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:15:21.348: INFO: Creating a PV followed by a PVC Jun 18 00:15:21.355: INFO: Waiting for PV local-pv94fbr to bind to PVC pvc-zjkw9 Jun 18 00:15:21.355: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zjkw9] to have phase Bound Jun 18 00:15:21.359: INFO: PersistentVolumeClaim pvc-zjkw9 found but phase is Pending instead of Bound. Jun 18 00:15:23.362: INFO: PersistentVolumeClaim pvc-zjkw9 found but phase is Pending instead of Bound. Jun 18 00:15:25.366: INFO: PersistentVolumeClaim pvc-zjkw9 found but phase is Pending instead of Bound. Jun 18 00:15:27.370: INFO: PersistentVolumeClaim pvc-zjkw9 found and phase=Bound (6.014377281s) Jun 18 00:15:27.370: INFO: Waiting up to 3m0s for PersistentVolume local-pv94fbr to have phase Bound Jun 18 00:15:27.373: INFO: PersistentVolume local-pv94fbr found and phase=Bound (3.227589ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 18 00:15:27.378: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:15:27.380: INFO: Deleting PersistentVolumeClaim "pvc-zjkw9" Jun 18 00:15:27.385: INFO: Deleting PersistentVolume "local-pv94fbr" Jun 18 00:15:27.388: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f] Namespace:persistent-local-volumes-test-8672 PodName:hostexec-node2-qbtgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:27.389: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:15:27.730: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8672 PodName:hostexec-node2-qbtgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:27.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f/file Jun 18 00:15:27.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8672 PodName:hostexec-node2-qbtgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:27.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f Jun 18 00:15:28.215: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-afb36030-3829-41f1-a4f3-f861ead5332f] Namespace:persistent-local-volumes-test-8672 PodName:hostexec-node2-qbtgt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:28.215: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:28.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8672" for this suite. S [SKIPPING] [15.706 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:15:28.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Jun 18 00:15:28.571: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:28.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-7929" for this suite. 
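The "blockfswithformat" preparation and teardown above amount to building a loop device over a 20 MiB backing file, formatting it ext4 and mounting it back over the test directory. A condensed sketch follows (run as root on the node; the directory name is illustrative, and the suite discovers the loop device by grepping `losetup` output rather than using -j).

DIR=/tmp/local-volume-demo

mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120   # 20 MiB backing file
losetup -f "$DIR/file"                              # attach to the first free loop device
LOOP=$(losetup -j "$DIR/file" | cut -d: -f1)        # e.g. /dev/loop0

mkfs -t ext4 "$LOOP"                                # format the loop device ...
mount -t ext4 "$LOOP" "$DIR"                        # ... and mount it over the directory
chmod o+rwx "$DIR"

# Teardown, as in the AfterEach above:
umount "$DIR"
losetup -d "$LOOP"
rm -r "$DIR"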
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:81 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:52.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-fd49a81b-1fd8-4e0f-b2ff-78ef70d8df1d" Jun 18 00:14:56.737: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-fd49a81b-1fd8-4e0f-b2ff-78ef70d8df1d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-fd49a81b-1fd8-4e0f-b2ff-78ef70d8df1d" "/tmp/local-volume-test-fd49a81b-1fd8-4e0f-b2ff-78ef70d8df1d"] Namespace:persistent-local-volumes-test-8952 PodName:hostexec-node2-vscm5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:14:56.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:14:56.867: INFO: Creating a PV followed by a PVC Jun 18 00:14:56.874: INFO: Waiting for PV local-pvrhn9c to bind to PVC pvc-zrns6 Jun 18 00:14:56.874: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zrns6] to have phase Bound Jun 18 00:14:56.876: INFO: PersistentVolumeClaim pvc-zrns6 found but phase is Pending instead of Bound. Jun 18 00:14:58.879: INFO: PersistentVolumeClaim pvc-zrns6 found but phase is Pending instead of Bound. Jun 18 00:15:00.883: INFO: PersistentVolumeClaim pvc-zrns6 found but phase is Pending instead of Bound. Jun 18 00:15:02.887: INFO: PersistentVolumeClaim pvc-zrns6 found but phase is Pending instead of Bound. Jun 18 00:15:04.892: INFO: PersistentVolumeClaim pvc-zrns6 found but phase is Pending instead of Bound. Jun 18 00:15:06.895: INFO: PersistentVolumeClaim pvc-zrns6 found but phase is Pending instead of Bound. Jun 18 00:15:08.900: INFO: PersistentVolumeClaim pvc-zrns6 found but phase is Pending instead of Bound. Jun 18 00:15:10.904: INFO: PersistentVolumeClaim pvc-zrns6 found but phase is Pending instead of Bound. 
Jun 18 00:15:12.908: INFO: PersistentVolumeClaim pvc-zrns6 found and phase=Bound (16.034219538s) Jun 18 00:15:12.908: INFO: Waiting up to 3m0s for PersistentVolume local-pvrhn9c to have phase Bound Jun 18 00:15:12.910: INFO: PersistentVolume local-pvrhn9c found and phase=Bound (1.913902ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Jun 18 00:15:20.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8952 exec pod-ac10587e-f27c-470d-81c1-06324c4f8970 --namespace=persistent-local-volumes-test-8952 -- stat -c %g /mnt/volume1' Jun 18 00:15:21.193: INFO: stderr: "" Jun 18 00:15:21.193: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Jun 18 00:15:37.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-8952 exec pod-2b0d0786-14ee-4288-a4fe-2c8ce4d82c99 --namespace=persistent-local-volumes-test-8952 -- stat -c %g /mnt/volume1' Jun 18 00:15:37.462: INFO: stderr: "" Jun 18 00:15:37.462: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-ac10587e-f27c-470d-81c1-06324c4f8970 in namespace persistent-local-volumes-test-8952 STEP: Deleting second pod STEP: Deleting pod pod-2b0d0786-14ee-4288-a4fe-2c8ce4d82c99 in namespace persistent-local-volumes-test-8952 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:15:37.471: INFO: Deleting PersistentVolumeClaim "pvc-zrns6" Jun 18 00:15:37.474: INFO: Deleting PersistentVolume "local-pvrhn9c" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-fd49a81b-1fd8-4e0f-b2ff-78ef70d8df1d" Jun 18 00:15:37.479: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-fd49a81b-1fd8-4e0f-b2ff-78ef70d8df1d"] Namespace:persistent-local-volumes-test-8952 PodName:hostexec-node2-vscm5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:37.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:15:37.591: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fd49a81b-1fd8-4e0f-b2ff-78ef70d8df1d] Namespace:persistent-local-volumes-test-8952 PodName:hostexec-node2-vscm5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:37.591: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:37.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8952" for this suite. 
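The recurring "Waiting up to timeout=3m0s for PersistentVolumeClaims [...] to have phase Bound" lines (about 16 seconds for pvc-zrns6 above) come from the framework polling the claim every two seconds. A rough shell equivalent of that wait, using names from this run purely as placeholders:

NS=persistent-local-volumes-test-8952
PVC=pvc-zrns6

for i in $(seq 1 90); do                            # 90 x 2s = 3m, matching the framework timeout
  phase=$(kubectl -n "$NS" get pvc "$PVC" -o jsonpath='{.status.phase}')
  if [ "$phase" = "Bound" ]; then
    echo "PersistentVolumeClaim $PVC found and phase=Bound"
    break
  fi
  echo "PersistentVolumeClaim $PVC found but phase is $phase instead of Bound."
  sleep 2
done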
• [SLOW TEST:45.006 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":16,"skipped":576,"failed":0} SSSSSSSSSSSSS ------------------------------ Jun 18 00:15:37.723: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:15:28.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Jun 18 00:15:38.666: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-7b527690-6408-47b5-8c8f-09e982f34d13-backend && ln -s /tmp/local-volume-test-7b527690-6408-47b5-8c8f-09e982f34d13-backend /tmp/local-volume-test-7b527690-6408-47b5-8c8f-09e982f34d13] Namespace:persistent-local-volumes-test-9176 PodName:hostexec-node2-5zts9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:38.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Jun 18 00:15:38.762: INFO: Creating a PV followed by a PVC Jun 18 00:15:38.770: INFO: Waiting for PV local-pvgcnsh to bind to PVC pvc-2f7v8 Jun 18 00:15:38.770: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2f7v8] to have phase Bound Jun 18 00:15:38.773: INFO: PersistentVolumeClaim pvc-2f7v8 found but phase is Pending instead of Bound. Jun 18 00:15:40.778: INFO: PersistentVolumeClaim pvc-2f7v8 found but phase is Pending instead of Bound. 
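The dir-link volume type being initialized above is nothing more than a symlink to a backing directory on the node's root filesystem. A condensed sketch of the host-side setup and teardown the hostexec pod performs, with an illustrative path in place of the generated one:

  # run on the node itself (the suite wraps this in nsenter via the hostexec pod)
  backend=/tmp/local-volume-test-example-backend
  link=/tmp/local-volume-test-example
  # setup: backing directory plus the symlink that the PV's local.path points at
  mkdir "$backend" && ln -s "$backend" "$link"
  # teardown, run only after the PV and PVC have been deleted
  rm -r "$link" && rm -r "$backend"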
Jun 18 00:15:42.783: INFO: PersistentVolumeClaim pvc-2f7v8 found and phase=Bound (4.013249207s) Jun 18 00:15:42.783: INFO: Waiting up to 3m0s for PersistentVolume local-pvgcnsh to have phase Bound Jun 18 00:15:42.786: INFO: PersistentVolume local-pvgcnsh found and phase=Bound (2.18323ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Jun 18 00:15:42.790: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Jun 18 00:15:42.792: INFO: Deleting PersistentVolumeClaim "pvc-2f7v8" Jun 18 00:15:42.796: INFO: Deleting PersistentVolume "local-pvgcnsh" STEP: Removing the test directory Jun 18 00:15:42.800: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7b527690-6408-47b5-8c8f-09e982f34d13 && rm -r /tmp/local-volume-test-7b527690-6408-47b5-8c8f-09e982f34d13-backend] Namespace:persistent-local-volumes-test-9176 PodName:hostexec-node2-5zts9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:42.800: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:15:42.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9176" for this suite. 
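Every host-level command in these specs (mount, ln, rm) is funneled through a privileged hostexec pod on the target node and wrapped in nsenter so it executes in the host mount namespace. Each ExecWithOptions entry is roughly equivalent to a manual exec of the following shape; the namespace, pod, and container names are taken from the run above and disappear once the namespace is destroyed, and the quoted command is a placeholder:

  kubectl exec -n persistent-local-volumes-test-9176 hostexec-node2-5zts9 -c agnhost-container -- \
    nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 'ls /tmp'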
S [SKIPPING] [14.359 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:15:11.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-1152 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:15:12.042: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1152-6686/csi-attacher Jun 18 00:15:12.044: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1152 Jun 18 00:15:12.045: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1152 Jun 18 00:15:12.047: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1152 Jun 18 00:15:12.051: INFO: creating *v1.Role: csi-mock-volumes-1152-6686/external-attacher-cfg-csi-mock-volumes-1152 Jun 18 00:15:12.054: INFO: creating *v1.RoleBinding: csi-mock-volumes-1152-6686/csi-attacher-role-cfg Jun 18 00:15:12.056: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1152-6686/csi-provisioner Jun 18 00:15:12.059: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1152 Jun 18 00:15:12.059: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1152 Jun 18 00:15:12.062: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1152 Jun 18 00:15:12.065: INFO: creating *v1.Role: csi-mock-volumes-1152-6686/external-provisioner-cfg-csi-mock-volumes-1152 Jun 18 00:15:12.068: INFO: creating *v1.RoleBinding: csi-mock-volumes-1152-6686/csi-provisioner-role-cfg Jun 18 00:15:12.071: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1152-6686/csi-resizer Jun 18 00:15:12.073: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1152 Jun 18 00:15:12.073: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1152 Jun 18 00:15:12.076: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1152 Jun 18 00:15:12.078: INFO: creating *v1.Role: csi-mock-volumes-1152-6686/external-resizer-cfg-csi-mock-volumes-1152 Jun 18 00:15:12.081: INFO: creating *v1.RoleBinding: csi-mock-volumes-1152-6686/csi-resizer-role-cfg Jun 18 00:15:12.083: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-1152-6686/csi-snapshotter Jun 18 00:15:12.086: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1152 Jun 18 00:15:12.086: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1152 Jun 18 00:15:12.088: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1152 Jun 18 00:15:12.092: INFO: creating *v1.Role: csi-mock-volumes-1152-6686/external-snapshotter-leaderelection-csi-mock-volumes-1152 Jun 18 00:15:12.095: INFO: creating *v1.RoleBinding: csi-mock-volumes-1152-6686/external-snapshotter-leaderelection Jun 18 00:15:12.097: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1152-6686/csi-mock Jun 18 00:15:12.101: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1152 Jun 18 00:15:12.104: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1152 Jun 18 00:15:12.107: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1152 Jun 18 00:15:12.111: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1152 Jun 18 00:15:12.113: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1152 Jun 18 00:15:12.116: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1152 Jun 18 00:15:12.119: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1152 Jun 18 00:15:12.123: INFO: creating *v1.StatefulSet: csi-mock-volumes-1152-6686/csi-mockplugin Jun 18 00:15:12.128: INFO: creating *v1.StatefulSet: csi-mock-volumes-1152-6686/csi-mockplugin-attacher Jun 18 00:15:12.131: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1152 to register on node node1 STEP: Creating pod Jun 18 00:15:21.647: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:15:21.651: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wrww4] to have phase Bound Jun 18 00:15:21.654: INFO: PersistentVolumeClaim pvc-wrww4 found but phase is Pending instead of Bound. 
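Once the claim above binds, the spec checks that a VolumeAttachment object was created for the pod even though no CSIDriver object was registered for the mock driver; absent a CSIDriver, the attach path is assumed to be required. A sketch of how the same state could be inspected by hand on a live run (object names vary per run):

  # CSIDriver objects currently registered (this spec deliberately installs none for the mock driver)
  kubectl get csidrivers
  # storage classes and the claim created for the spec
  kubectl get storageclass | grep csi-mock-volumes-1152
  kubectl get pvc -n csi-mock-volumes-1152
  # the attach/detach controller should still have produced an attachment for the provisioned PV
  kubectl get volumeattachments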
Jun 18 00:15:23.658: INFO: PersistentVolumeClaim pvc-wrww4 found and phase=Bound (2.006874818s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-vqlb7 Jun 18 00:15:43.690: INFO: Deleting pod "pvc-volume-tester-vqlb7" in namespace "csi-mock-volumes-1152" Jun 18 00:15:43.695: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vqlb7" to be fully deleted STEP: Deleting claim pvc-wrww4 Jun 18 00:15:49.707: INFO: Waiting up to 2m0s for PersistentVolume pvc-d2e4268c-8764-4cd7-badd-da66654a5772 to get deleted Jun 18 00:15:49.709: INFO: PersistentVolume pvc-d2e4268c-8764-4cd7-badd-da66654a5772 found and phase=Bound (1.861279ms) Jun 18 00:15:51.712: INFO: PersistentVolume pvc-d2e4268c-8764-4cd7-badd-da66654a5772 found and phase=Released (2.005530286s) Jun 18 00:15:53.715: INFO: PersistentVolume pvc-d2e4268c-8764-4cd7-badd-da66654a5772 was removed STEP: Deleting storageclass csi-mock-volumes-1152-scll8hd STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1152 STEP: Waiting for namespaces [csi-mock-volumes-1152] to vanish STEP: uninstalling csi mock driver Jun 18 00:15:59.731: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1152-6686/csi-attacher Jun 18 00:15:59.735: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1152 Jun 18 00:15:59.739: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1152 Jun 18 00:15:59.743: INFO: deleting *v1.Role: csi-mock-volumes-1152-6686/external-attacher-cfg-csi-mock-volumes-1152 Jun 18 00:15:59.747: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1152-6686/csi-attacher-role-cfg Jun 18 00:15:59.750: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1152-6686/csi-provisioner Jun 18 00:15:59.753: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1152 Jun 18 00:15:59.760: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1152 Jun 18 00:15:59.763: INFO: deleting *v1.Role: csi-mock-volumes-1152-6686/external-provisioner-cfg-csi-mock-volumes-1152 Jun 18 00:15:59.767: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1152-6686/csi-provisioner-role-cfg Jun 18 00:15:59.771: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1152-6686/csi-resizer Jun 18 00:15:59.774: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1152 Jun 18 00:15:59.777: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1152 Jun 18 00:15:59.780: INFO: deleting *v1.Role: csi-mock-volumes-1152-6686/external-resizer-cfg-csi-mock-volumes-1152 Jun 18 00:15:59.783: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1152-6686/csi-resizer-role-cfg Jun 18 00:15:59.786: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1152-6686/csi-snapshotter Jun 18 00:15:59.789: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1152 Jun 18 00:15:59.793: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1152 Jun 18 00:15:59.796: INFO: deleting *v1.Role: csi-mock-volumes-1152-6686/external-snapshotter-leaderelection-csi-mock-volumes-1152 Jun 18 00:15:59.799: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1152-6686/external-snapshotter-leaderelection Jun 18 00:15:59.803: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1152-6686/csi-mock Jun 18 00:15:59.806: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1152 Jun 18 00:15:59.809: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-provisioner-role-csi-mock-volumes-1152 Jun 18 00:15:59.813: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1152 Jun 18 00:15:59.816: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1152 Jun 18 00:15:59.820: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1152 Jun 18 00:15:59.824: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1152 Jun 18 00:15:59.827: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1152 Jun 18 00:15:59.830: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1152-6686/csi-mockplugin Jun 18 00:15:59.833: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1152-6686/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-1152-6686 STEP: Waiting for namespaces [csi-mock-volumes-1152-6686] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:16:15.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:63.867 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":11,"skipped":428,"failed":0} Jun 18 00:16:15.851: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:14:59.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 STEP: Building a driver namespace object, basename csi-mock-volumes-253 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Jun 18 00:14:59.355: INFO: creating *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-attacher Jun 18 00:14:59.358: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-253 Jun 18 00:14:59.358: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-253 Jun 18 00:14:59.361: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-253 Jun 18 00:14:59.364: INFO: creating *v1.Role: csi-mock-volumes-253-1233/external-attacher-cfg-csi-mock-volumes-253 Jun 18 00:14:59.366: INFO: creating *v1.RoleBinding: csi-mock-volumes-253-1233/csi-attacher-role-cfg Jun 18 00:14:59.369: INFO: creating *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-provisioner Jun 18 00:14:59.371: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-253 Jun 18 00:14:59.371: INFO: Define cluster 
role external-provisioner-runner-csi-mock-volumes-253 Jun 18 00:14:59.374: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-253 Jun 18 00:14:59.377: INFO: creating *v1.Role: csi-mock-volumes-253-1233/external-provisioner-cfg-csi-mock-volumes-253 Jun 18 00:14:59.380: INFO: creating *v1.RoleBinding: csi-mock-volumes-253-1233/csi-provisioner-role-cfg Jun 18 00:14:59.383: INFO: creating *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-resizer Jun 18 00:14:59.386: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-253 Jun 18 00:14:59.386: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-253 Jun 18 00:14:59.388: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-253 Jun 18 00:14:59.391: INFO: creating *v1.Role: csi-mock-volumes-253-1233/external-resizer-cfg-csi-mock-volumes-253 Jun 18 00:14:59.394: INFO: creating *v1.RoleBinding: csi-mock-volumes-253-1233/csi-resizer-role-cfg Jun 18 00:14:59.397: INFO: creating *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-snapshotter Jun 18 00:14:59.399: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-253 Jun 18 00:14:59.399: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-253 Jun 18 00:14:59.402: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-253 Jun 18 00:14:59.405: INFO: creating *v1.Role: csi-mock-volumes-253-1233/external-snapshotter-leaderelection-csi-mock-volumes-253 Jun 18 00:14:59.408: INFO: creating *v1.RoleBinding: csi-mock-volumes-253-1233/external-snapshotter-leaderelection Jun 18 00:14:59.411: INFO: creating *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-mock Jun 18 00:14:59.413: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-253 Jun 18 00:14:59.416: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-253 Jun 18 00:14:59.419: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-253 Jun 18 00:14:59.421: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-253 Jun 18 00:14:59.424: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-253 Jun 18 00:14:59.427: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-253 Jun 18 00:14:59.429: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-253 Jun 18 00:14:59.431: INFO: creating *v1.StatefulSet: csi-mock-volumes-253-1233/csi-mockplugin Jun 18 00:14:59.435: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-253 Jun 18 00:14:59.439: INFO: creating *v1.StatefulSet: csi-mock-volumes-253-1233/csi-mockplugin-attacher Jun 18 00:14:59.443: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-253" Jun 18 00:14:59.445: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-253 to register on node node2 STEP: Creating pod Jun 18 00:15:13.971: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Jun 18 00:15:41.996: INFO: Deleting pod "pvc-volume-tester-l9ld6" in namespace "csi-mock-volumes-253" Jun 18 00:15:42.002: INFO: Wait up to 5m0s for pod "pvc-volume-tester-l9ld6" to be fully deleted STEP: Deleting pod pvc-volume-tester-l9ld6 Jun 18 00:15:50.008: INFO: Deleting pod "pvc-volume-tester-l9ld6" in namespace "csi-mock-volumes-253" STEP: Deleting claim pvc-vjstw Jun 18 00:15:50.019: INFO: Waiting up to 2m0s 
for PersistentVolume pvc-b176c4b1-8332-40c2-8add-08ddd2e97059 to get deleted Jun 18 00:15:50.021: INFO: PersistentVolume pvc-b176c4b1-8332-40c2-8add-08ddd2e97059 found and phase=Bound (2.309174ms) Jun 18 00:15:52.027: INFO: PersistentVolume pvc-b176c4b1-8332-40c2-8add-08ddd2e97059 found and phase=Released (2.008254235s) Jun 18 00:15:54.031: INFO: PersistentVolume pvc-b176c4b1-8332-40c2-8add-08ddd2e97059 found and phase=Released (4.012535655s) Jun 18 00:15:56.036: INFO: PersistentVolume pvc-b176c4b1-8332-40c2-8add-08ddd2e97059 found and phase=Released (6.016745232s) Jun 18 00:15:58.039: INFO: PersistentVolume pvc-b176c4b1-8332-40c2-8add-08ddd2e97059 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-253 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-253 STEP: Waiting for namespaces [csi-mock-volumes-253] to vanish STEP: uninstalling csi mock driver Jun 18 00:16:04.050: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-attacher Jun 18 00:16:04.055: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-253 Jun 18 00:16:04.059: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-253 Jun 18 00:16:04.063: INFO: deleting *v1.Role: csi-mock-volumes-253-1233/external-attacher-cfg-csi-mock-volumes-253 Jun 18 00:16:04.066: INFO: deleting *v1.RoleBinding: csi-mock-volumes-253-1233/csi-attacher-role-cfg Jun 18 00:16:04.070: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-provisioner Jun 18 00:16:04.073: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-253 Jun 18 00:16:04.076: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-253 Jun 18 00:16:04.081: INFO: deleting *v1.Role: csi-mock-volumes-253-1233/external-provisioner-cfg-csi-mock-volumes-253 Jun 18 00:16:04.086: INFO: deleting *v1.RoleBinding: csi-mock-volumes-253-1233/csi-provisioner-role-cfg Jun 18 00:16:04.097: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-resizer Jun 18 00:16:04.105: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-253 Jun 18 00:16:04.111: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-253 Jun 18 00:16:04.114: INFO: deleting *v1.Role: csi-mock-volumes-253-1233/external-resizer-cfg-csi-mock-volumes-253 Jun 18 00:16:04.118: INFO: deleting *v1.RoleBinding: csi-mock-volumes-253-1233/csi-resizer-role-cfg Jun 18 00:16:04.121: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-snapshotter Jun 18 00:16:04.124: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-253 Jun 18 00:16:04.128: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-253 Jun 18 00:16:04.131: INFO: deleting *v1.Role: csi-mock-volumes-253-1233/external-snapshotter-leaderelection-csi-mock-volumes-253 Jun 18 00:16:04.135: INFO: deleting *v1.RoleBinding: csi-mock-volumes-253-1233/external-snapshotter-leaderelection Jun 18 00:16:04.138: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-253-1233/csi-mock Jun 18 00:16:04.141: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-253 Jun 18 00:16:04.144: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-253 Jun 18 00:16:04.148: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-253 Jun 18 00:16:04.152: INFO: deleting *v1.ClusterRoleBinding: 
psp-csi-controller-driver-registrar-role-csi-mock-volumes-253 Jun 18 00:16:04.155: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-253 Jun 18 00:16:04.159: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-253 Jun 18 00:16:04.162: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-253 Jun 18 00:16:04.166: INFO: deleting *v1.StatefulSet: csi-mock-volumes-253-1233/csi-mockplugin Jun 18 00:16:04.170: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-253 Jun 18 00:16:04.174: INFO: deleting *v1.StatefulSet: csi-mock-volumes-253-1233/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-253-1233 STEP: Waiting for namespaces [csi-mock-volumes-253-1233] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:16:16.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:76.897 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256 CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":16,"skipped":606,"failed":1,"failures":["[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]"]} Jun 18 00:16:16.196: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:15:06.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "node1" STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0348c472-c80c-4f0d-ba3f-3f380ee358fc" Jun 18 00:15:08.942: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0348c472-c80c-4f0d-ba3f-3f380ee358fc" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0348c472-c80c-4f0d-ba3f-3f380ee358fc" "/tmp/local-volume-test-0348c472-c80c-4f0d-ba3f-3f380ee358fc"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:08.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-61ab4274-d9de-447e-a44d-e94117cb9aa1" Jun 18 00:15:09.129: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-61ab4274-d9de-447e-a44d-e94117cb9aa1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-61ab4274-d9de-447e-a44d-e94117cb9aa1" "/tmp/local-volume-test-61ab4274-d9de-447e-a44d-e94117cb9aa1"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:09.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-8cfb9f2f-611f-48db-9801-1f8a2631fbb4" Jun 18 00:15:09.249: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8cfb9f2f-611f-48db-9801-1f8a2631fbb4" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8cfb9f2f-611f-48db-9801-1f8a2631fbb4" "/tmp/local-volume-test-8cfb9f2f-611f-48db-9801-1f8a2631fbb4"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:09.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-e148220c-2246-4adc-a9cc-c64fe13172a9" Jun 18 00:15:09.348: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e148220c-2246-4adc-a9cc-c64fe13172a9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e148220c-2246-4adc-a9cc-c64fe13172a9" "/tmp/local-volume-test-e148220c-2246-4adc-a9cc-c64fe13172a9"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:09.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-40aad9c5-4dd1-456a-9bda-2df06fabd434" Jun 18 00:15:09.433: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-40aad9c5-4dd1-456a-9bda-2df06fabd434" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-40aad9c5-4dd1-456a-9bda-2df06fabd434" "/tmp/local-volume-test-40aad9c5-4dd1-456a-9bda-2df06fabd434"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:09.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6ad28f85-9e61-487f-908f-d8d60f801cc9" Jun 18 00:15:09.538: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6ad28f85-9e61-487f-908f-d8d60f801cc9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6ad28f85-9e61-487f-908f-d8d60f801cc9" "/tmp/local-volume-test-6ad28f85-9e61-487f-908f-d8d60f801cc9"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:09.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-a89db4be-21c5-4b8a-ab80-ee303c865d1b" Jun 18 00:15:09.661: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-a89db4be-21c5-4b8a-ab80-ee303c865d1b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a89db4be-21c5-4b8a-ab80-ee303c865d1b" "/tmp/local-volume-test-a89db4be-21c5-4b8a-ab80-ee303c865d1b"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:09.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f24fdf31-2364-4aba-b1dd-5ba2db5dec8a" Jun 18 00:15:09.780: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f24fdf31-2364-4aba-b1dd-5ba2db5dec8a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f24fdf31-2364-4aba-b1dd-5ba2db5dec8a" "/tmp/local-volume-test-f24fdf31-2364-4aba-b1dd-5ba2db5dec8a"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:09.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-53244ef3-8ff8-456e-a4e7-b698c0f30a6a" Jun 18 00:15:09.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-53244ef3-8ff8-456e-a4e7-b698c0f30a6a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-53244ef3-8ff8-456e-a4e7-b698c0f30a6a" "/tmp/local-volume-test-53244ef3-8ff8-456e-a4e7-b698c0f30a6a"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:09.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-05a75871-d630-4bc4-b55c-cc66a5d9ffa1" Jun 18 00:15:09.976: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-05a75871-d630-4bc4-b55c-cc66a5d9ffa1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-05a75871-d630-4bc4-b55c-cc66a5d9ffa1" "/tmp/local-volume-test-05a75871-d630-4bc4-b55c-cc66a5d9ffa1"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:09.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-7e0f2c12-6938-41dc-ae73-bee8f7c718dd" Jun 18 00:15:14.115: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7e0f2c12-6938-41dc-ae73-bee8f7c718dd" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7e0f2c12-6938-41dc-ae73-bee8f7c718dd" "/tmp/local-volume-test-7e0f2c12-6938-41dc-ae73-bee8f7c718dd"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:14.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-cfa12f70-5aaa-450d-bdf1-bb14301dd027" Jun 18 00:15:14.292: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-cfa12f70-5aaa-450d-bdf1-bb14301dd027" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-cfa12f70-5aaa-450d-bdf1-bb14301dd027" "/tmp/local-volume-test-cfa12f70-5aaa-450d-bdf1-bb14301dd027"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:14.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8994c814-e7d0-4ba2-a09f-68e5cb98b973" Jun 18 00:15:14.433: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8994c814-e7d0-4ba2-a09f-68e5cb98b973" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8994c814-e7d0-4ba2-a09f-68e5cb98b973" "/tmp/local-volume-test-8994c814-e7d0-4ba2-a09f-68e5cb98b973"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:14.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-daa25590-f424-4597-8d96-df175a89e2b6" Jun 18 00:15:14.624: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-daa25590-f424-4597-8d96-df175a89e2b6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-daa25590-f424-4597-8d96-df175a89e2b6" "/tmp/local-volume-test-daa25590-f424-4597-8d96-df175a89e2b6"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:14.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-10c51bf5-9431-4f5a-8533-f691ab7b603a" Jun 18 00:15:14.772: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-10c51bf5-9431-4f5a-8533-f691ab7b603a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-10c51bf5-9431-4f5a-8533-f691ab7b603a" "/tmp/local-volume-test-10c51bf5-9431-4f5a-8533-f691ab7b603a"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:14.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-fd9d0453-2d78-4f47-9652-4bd1f3612320" Jun 18 00:15:14.936: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-fd9d0453-2d78-4f47-9652-4bd1f3612320" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-fd9d0453-2d78-4f47-9652-4bd1f3612320" "/tmp/local-volume-test-fd9d0453-2d78-4f47-9652-4bd1f3612320"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:14.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-24d95cb0-e891-4862-aa2c-f8039e4e7df0" Jun 18 00:15:15.169: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-24d95cb0-e891-4862-aa2c-f8039e4e7df0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-24d95cb0-e891-4862-aa2c-f8039e4e7df0" "/tmp/local-volume-test-24d95cb0-e891-4862-aa2c-f8039e4e7df0"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:15.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-95b68d7b-4b9d-4143-bdf8-b97721e25b90" Jun 18 00:15:15.300: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-95b68d7b-4b9d-4143-bdf8-b97721e25b90" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-95b68d7b-4b9d-4143-bdf8-b97721e25b90" "/tmp/local-volume-test-95b68d7b-4b9d-4143-bdf8-b97721e25b90"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:15.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-5380afc7-1e3c-44e4-8a89-cf36679163a2" Jun 18 00:15:15.387: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5380afc7-1e3c-44e4-8a89-cf36679163a2" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5380afc7-1e3c-44e4-8a89-cf36679163a2" "/tmp/local-volume-test-5380afc7-1e3c-44e4-8a89-cf36679163a2"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:15.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-2b946854-f925-4ebc-815b-ee438cb9a3f0" Jun 18 00:15:15.496: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2b946854-f925-4ebc-815b-ee438cb9a3f0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2b946854-f925-4ebc-815b-ee438cb9a3f0" "/tmp/local-volume-test-2b946854-f925-4ebc-815b-ee438cb9a3f0"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:15:15.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully STEP: Delete "local-pvcnwxd" and create a new PV for same local volume storage Jun 18 00:15:24.111: INFO: Deleting pod pod-beaeea91-2ec5-4a74-a2cc-173d3e82b0d8 Jun 18 00:15:24.119: INFO: Deleting PersistentVolumeClaim "pvc-xssfm" Jun 18 00:15:24.123: INFO: Deleting PersistentVolumeClaim "pvc-t2zlq" Jun 18 00:15:24.126: INFO: Deleting PersistentVolumeClaim "pvc-nnj7n" Jun 18 00:15:24.130: INFO: 1/28 pods finished STEP: Delete "local-pvt7vwk" and create a new PV for same local volume storage STEP: Delete "local-pvgxmkh" and create a new PV for same local volume storage STEP: Delete "local-pvw8fh7" and create a new PV for same 
local volume storage Jun 18 00:15:27.112: INFO: Deleting pod pod-b7abc360-98b6-4ed4-b663-7c04334d2b47 Jun 18 00:15:27.119: INFO: Deleting PersistentVolumeClaim "pvc-6x592" Jun 18 00:15:27.123: INFO: Deleting PersistentVolumeClaim "pvc-b6ltn" Jun 18 00:15:27.127: INFO: Deleting PersistentVolumeClaim "pvc-9s879" Jun 18 00:15:27.131: INFO: 2/28 pods finished STEP: Delete "local-pvx6xwv" and create a new PV for same local volume storage STEP: Delete "local-pv4p7v2" and create a new PV for same local volume storage STEP: Delete "local-pvlmfl2" and create a new PV for same local volume storage STEP: Delete "local-pv94fbr" and create a new PV for same local volume storage Jun 18 00:15:28.112: INFO: Deleting pod pod-92776186-79d0-4b97-8415-767fc669fd7a Jun 18 00:15:28.119: INFO: Deleting PersistentVolumeClaim "pvc-flvh2" Jun 18 00:15:28.122: INFO: Deleting PersistentVolumeClaim "pvc-bfhvg" Jun 18 00:15:28.125: INFO: Deleting PersistentVolumeClaim "pvc-2kwcb" Jun 18 00:15:28.130: INFO: 3/28 pods finished STEP: Delete "local-pvtl9gh" and create a new PV for same local volume storage STEP: Delete "local-pvhtpv2" and create a new PV for same local volume storage STEP: Delete "local-pvw6bt5" and create a new PV for same local volume storage Jun 18 00:15:29.110: INFO: Deleting pod pod-2bbd5796-c243-4e46-881a-61a30fec51fd Jun 18 00:15:29.115: INFO: Deleting PersistentVolumeClaim "pvc-9rlgh" Jun 18 00:15:29.120: INFO: Deleting PersistentVolumeClaim "pvc-25w4q" Jun 18 00:15:29.124: INFO: Deleting PersistentVolumeClaim "pvc-rjqb6" Jun 18 00:15:29.128: INFO: 4/28 pods finished STEP: Delete "local-pv74stt" and create a new PV for same local volume storage STEP: Delete "local-pvn87kz" and create a new PV for same local volume storage STEP: Delete "local-pvhd5w8" and create a new PV for same local volume storage Jun 18 00:15:30.242: INFO: Deleting pod pod-fd1a1fb9-89ed-4141-ac0b-ae1ce161e93d Jun 18 00:15:30.248: INFO: Deleting PersistentVolumeClaim "pvc-hvcwg" Jun 18 00:15:30.251: INFO: Deleting PersistentVolumeClaim "pvc-5ht5x" Jun 18 00:15:30.255: INFO: Deleting PersistentVolumeClaim "pvc-8psfl" Jun 18 00:15:30.259: INFO: 5/28 pods finished STEP: Delete "local-pvj5mr7" and create a new PV for same local volume storage STEP: Delete "local-pvd84gq" and create a new PV for same local volume storage STEP: Delete "local-pvhx4ld" and create a new PV for same local volume storage Jun 18 00:15:37.111: INFO: Deleting pod pod-bce9258e-e099-4465-8c74-efb2d2ca07b0 Jun 18 00:15:37.119: INFO: Deleting PersistentVolumeClaim "pvc-ztnnp" Jun 18 00:15:37.122: INFO: Deleting PersistentVolumeClaim "pvc-mj6d5" Jun 18 00:15:37.126: INFO: Deleting PersistentVolumeClaim "pvc-cn9tt" Jun 18 00:15:37.129: INFO: 6/28 pods finished Jun 18 00:15:37.129: INFO: Deleting pod pod-d670c404-d4fa-47dc-9a33-d556010d576f Jun 18 00:15:37.135: INFO: Deleting PersistentVolumeClaim "pvc-zh2gw" Jun 18 00:15:37.139: INFO: Deleting PersistentVolumeClaim "pvc-rkjn7" STEP: Delete "local-pvv62tg" and create a new PV for same local volume storage Jun 18 00:15:37.143: INFO: Deleting PersistentVolumeClaim "pvc-skgsk" Jun 18 00:15:37.146: INFO: 7/28 pods finished STEP: Delete "local-pvph68s" and create a new PV for same local volume storage STEP: Delete "local-pvk29h8" and create a new PV for same local volume storage STEP: Delete "local-pvn586w" and create a new PV for same local volume storage STEP: Delete "local-pvz7mhx" and create a new PV for same local volume storage STEP: Delete "local-pvrc2jx" and create a new PV for same local volume storage STEP: 
Delete "local-pvgcnsh" and create a new PV for same local volume storage STEP: Delete "local-pvrhn9c" and create a new PV for same local volume storage Jun 18 00:15:44.111: INFO: Deleting pod pod-ee710dbe-9881-402a-8505-2685ec3feb08 Jun 18 00:15:44.119: INFO: Deleting PersistentVolumeClaim "pvc-jnb8d" Jun 18 00:15:44.124: INFO: Deleting PersistentVolumeClaim "pvc-fpsjd" Jun 18 00:15:44.127: INFO: Deleting PersistentVolumeClaim "pvc-pbqx8" Jun 18 00:15:44.130: INFO: 8/28 pods finished STEP: Delete "local-pvj7h4c" and create a new PV for same local volume storage STEP: Delete "local-pvpg5lm" and create a new PV for same local volume storage STEP: Delete "local-pvjzcvw" and create a new PV for same local volume storage Jun 18 00:15:45.112: INFO: Deleting pod pod-33a60350-9fd3-4ee0-8ec6-293a0b6e4b89 Jun 18 00:15:45.120: INFO: Deleting PersistentVolumeClaim "pvc-wtfzh" Jun 18 00:15:45.123: INFO: Deleting PersistentVolumeClaim "pvc-5x5sj" Jun 18 00:15:45.127: INFO: Deleting PersistentVolumeClaim "pvc-7qxfc" Jun 18 00:15:45.131: INFO: 9/28 pods finished Jun 18 00:15:45.131: INFO: Deleting pod pod-9c905d6b-ec16-4049-ae74-d605f0f9d98a Jun 18 00:15:45.137: INFO: Deleting PersistentVolumeClaim "pvc-l7d2s" STEP: Delete "local-pvhrxs5" and create a new PV for same local volume storage Jun 18 00:15:45.141: INFO: Deleting PersistentVolumeClaim "pvc-tclbp" Jun 18 00:15:45.145: INFO: Deleting PersistentVolumeClaim "pvc-x2rd5" STEP: Delete "local-pvhcdn6" and create a new PV for same local volume storage Jun 18 00:15:45.148: INFO: 10/28 pods finished STEP: Delete "local-pvgj654" and create a new PV for same local volume storage STEP: Delete "local-pvckqtw" and create a new PV for same local volume storage STEP: Delete "local-pv92lsw" and create a new PV for same local volume storage STEP: Delete "local-pvpvh9j" and create a new PV for same local volume storage Jun 18 00:15:49.111: INFO: Deleting pod pod-f62ac0ce-a328-4045-8ef5-67ce16e3facc Jun 18 00:15:49.121: INFO: Deleting PersistentVolumeClaim "pvc-ftksh" Jun 18 00:15:49.125: INFO: Deleting PersistentVolumeClaim "pvc-tlg46" Jun 18 00:15:49.129: INFO: Deleting PersistentVolumeClaim "pvc-mjxlq" Jun 18 00:15:49.133: INFO: 11/28 pods finished STEP: Delete "local-pvhgr87" and create a new PV for same local volume storage STEP: Delete "local-pv6fjnz" and create a new PV for same local volume storage STEP: Delete "local-pv9nzxj" and create a new PV for same local volume storage STEP: Delete "pvc-d2e4268c-8764-4cd7-badd-da66654a5772" and create a new PV for same local volume storage STEP: Delete "pvc-b176c4b1-8332-40c2-8add-08ddd2e97059" and create a new PV for same local volume storage Jun 18 00:15:50.110: INFO: Deleting pod pod-bbef6a35-0885-4748-bcba-21c18252ddb2 Jun 18 00:15:50.115: INFO: Deleting PersistentVolumeClaim "pvc-d2zb6" Jun 18 00:15:50.119: INFO: Deleting PersistentVolumeClaim "pvc-95g2j" Jun 18 00:15:50.122: INFO: Deleting PersistentVolumeClaim "pvc-t5cnm" Jun 18 00:15:50.126: INFO: 12/28 pods finished STEP: Delete "local-pvc9dxz" and create a new PV for same local volume storage STEP: Delete "local-pvlfvt8" and create a new PV for same local volume storage STEP: Delete "local-pvwdpf4" and create a new PV for same local volume storage Jun 18 00:15:52.111: INFO: Deleting pod pod-232647c5-6c34-4d42-b102-7982aceb6ac4 Jun 18 00:15:52.119: INFO: Deleting PersistentVolumeClaim "pvc-j59fn" Jun 18 00:15:52.123: INFO: Deleting PersistentVolumeClaim "pvc-ll8mn" Jun 18 00:15:52.127: INFO: Deleting PersistentVolumeClaim "pvc-bpls5" Jun 18 00:15:52.130: INFO: 
13/28 pods finished STEP: Delete "local-pvv8lhg" and create a new PV for same local volume storage STEP: Delete "local-pvzgbhb" and create a new PV for same local volume storage STEP: Delete "local-pv4d4rt" and create a new PV for same local volume storage STEP: Delete "pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc" and create a new PV for same local volume storage STEP: Delete "pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc" and create a new PV for same local volume storage STEP: Delete "pvc-d2e4268c-8764-4cd7-badd-da66654a5772" and create a new PV for same local volume storage STEP: Delete "pvc-d2e4268c-8764-4cd7-badd-da66654a5772" and create a new PV for same local volume storage STEP: Delete "pvc-b176c4b1-8332-40c2-8add-08ddd2e97059" and create a new PV for same local volume storage STEP: Delete "pvc-b176c4b1-8332-40c2-8add-08ddd2e97059" and create a new PV for same local volume storage Jun 18 00:15:58.110: INFO: Deleting pod pod-1811e659-50e4-416e-9812-39e7c8ba04d0 Jun 18 00:15:58.118: INFO: Deleting PersistentVolumeClaim "pvc-wlw5k" Jun 18 00:15:58.122: INFO: Deleting PersistentVolumeClaim "pvc-9cflc" Jun 18 00:15:58.126: INFO: Deleting PersistentVolumeClaim "pvc-wshl7" Jun 18 00:15:58.130: INFO: 14/28 pods finished STEP: Delete "local-pv4ksjh" and create a new PV for same local volume storage STEP: Delete "local-pvrvw72" and create a new PV for same local volume storage STEP: Delete "local-pvpwxkb" and create a new PV for same local volume storage Jun 18 00:16:00.111: INFO: Deleting pod pod-e9fc2074-8194-4691-98ee-a89e9af4b450 Jun 18 00:16:00.116: INFO: Deleting PersistentVolumeClaim "pvc-2ntwc" Jun 18 00:16:00.122: INFO: Deleting PersistentVolumeClaim "pvc-646vd" Jun 18 00:16:00.129: INFO: Deleting PersistentVolumeClaim "pvc-fxpjl" Jun 18 00:16:00.136: INFO: 15/28 pods finished STEP: Delete "local-pvzgj2c" and create a new PV for same local volume storage STEP: Delete "local-pvn6kjg" and create a new PV for same local volume storage STEP: Delete "local-pvq7s2z" and create a new PV for same local volume storage Jun 18 00:16:01.110: INFO: Deleting pod pod-f217a00b-2568-41d6-bdc7-10f62e39f3e1 Jun 18 00:16:01.117: INFO: Deleting PersistentVolumeClaim "pvc-n8l26" Jun 18 00:16:01.121: INFO: Deleting PersistentVolumeClaim "pvc-zntjs" Jun 18 00:16:01.124: INFO: Deleting PersistentVolumeClaim "pvc-lndjp" Jun 18 00:16:01.128: INFO: 16/28 pods finished STEP: Delete "local-pvzbh2d" and create a new PV for same local volume storage STEP: Delete "local-pvpgv49" and create a new PV for same local volume storage STEP: Delete "local-pvzq94l" and create a new PV for same local volume storage Jun 18 00:16:03.115: INFO: Deleting pod pod-276bc59a-6d33-44f2-8523-7f3f3388e3f8 Jun 18 00:16:03.125: INFO: Deleting PersistentVolumeClaim "pvc-crx84" Jun 18 00:16:03.128: INFO: Deleting PersistentVolumeClaim "pvc-cjwgr" Jun 18 00:16:03.132: INFO: Deleting PersistentVolumeClaim "pvc-xmxn2" Jun 18 00:16:03.135: INFO: 17/28 pods finished STEP: Delete "local-pv9sw9k" and create a new PV for same local volume storage STEP: Delete "local-pvkwxqr" and create a new PV for same local volume storage STEP: Delete "local-pv7tx8m" and create a new PV for same local volume storage Jun 18 00:16:04.112: INFO: Deleting pod pod-c1a6c184-d66a-4b8b-9378-72844d09a150 Jun 18 00:16:04.117: INFO: Deleting PersistentVolumeClaim "pvc-6hvmk" Jun 18 00:16:04.120: INFO: Deleting PersistentVolumeClaim "pvc-kpvv6" Jun 18 00:16:04.123: INFO: Deleting PersistentVolumeClaim "pvc-lrcg4" Jun 18 00:16:04.127: INFO: 18/28 pods finished STEP: Delete 
"local-pvrj4rx" and create a new PV for same local volume storage STEP: Delete "local-pvzfgcb" and create a new PV for same local volume storage STEP: Delete "local-pv7jr6c" and create a new PV for same local volume storage Jun 18 00:16:10.111: INFO: Deleting pod pod-2c4bd2d2-37c0-404c-b8e7-85e6bac94997 Jun 18 00:16:10.118: INFO: Deleting PersistentVolumeClaim "pvc-86dlx" Jun 18 00:16:10.122: INFO: Deleting PersistentVolumeClaim "pvc-555l5" Jun 18 00:16:10.126: INFO: Deleting PersistentVolumeClaim "pvc-fp445" Jun 18 00:16:10.129: INFO: 19/28 pods finished STEP: Delete "local-pv4b97p" and create a new PV for same local volume storage STEP: Delete "local-pvgn22p" and create a new PV for same local volume storage STEP: Delete "local-pvspf7z" and create a new PV for same local volume storage Jun 18 00:16:11.111: INFO: Deleting pod pod-f65b25a8-43b7-4aa6-ac1d-f81213d0b574 Jun 18 00:16:11.119: INFO: Deleting PersistentVolumeClaim "pvc-mxggk" Jun 18 00:16:11.123: INFO: Deleting PersistentVolumeClaim "pvc-nckvq" Jun 18 00:16:11.127: INFO: Deleting PersistentVolumeClaim "pvc-n8wn6" Jun 18 00:16:11.131: INFO: 20/28 pods finished STEP: Delete "local-pvdcj7l" and create a new PV for same local volume storage STEP: Delete "local-pvvggcc" and create a new PV for same local volume storage STEP: Delete "local-pvs5h2l" and create a new PV for same local volume storage Jun 18 00:16:13.113: INFO: Deleting pod pod-53e14d9b-618e-452d-9a99-aa4994baf6d6 Jun 18 00:16:13.123: INFO: Deleting PersistentVolumeClaim "pvc-tzr88" Jun 18 00:16:13.138: INFO: Deleting PersistentVolumeClaim "pvc-6gf6h" Jun 18 00:16:13.142: INFO: Deleting PersistentVolumeClaim "pvc-4p744" Jun 18 00:16:13.145: INFO: 21/28 pods finished Jun 18 00:16:13.145: INFO: Deleting pod pod-5af8121d-9860-461e-9db5-0d208e799053 STEP: Delete "local-pvgr44j" and create a new PV for same local volume storage Jun 18 00:16:13.153: INFO: Deleting PersistentVolumeClaim "pvc-m2td8" Jun 18 00:16:13.157: INFO: Deleting PersistentVolumeClaim "pvc-6d7c5" Jun 18 00:16:13.160: INFO: Deleting PersistentVolumeClaim "pvc-m8hwh" STEP: Delete "local-pv8h9gg" and create a new PV for same local volume storage Jun 18 00:16:13.164: INFO: 22/28 pods finished STEP: Delete "local-pvh58x4" and create a new PV for same local volume storage STEP: Delete "local-pv8glzp" and create a new PV for same local volume storage STEP: Delete "local-pvr6zj6" and create a new PV for same local volume storage STEP: Delete "local-pv4hxq4" and create a new PV for same local volume storage Jun 18 00:16:14.110: INFO: Deleting pod pod-2ed8bcb8-4541-4c58-8a22-153517b4a94a Jun 18 00:16:14.118: INFO: Deleting PersistentVolumeClaim "pvc-djwb7" Jun 18 00:16:14.122: INFO: Deleting PersistentVolumeClaim "pvc-9m5xt" Jun 18 00:16:14.125: INFO: Deleting PersistentVolumeClaim "pvc-vw8jb" Jun 18 00:16:14.129: INFO: 23/28 pods finished Jun 18 00:16:14.129: INFO: Deleting pod pod-33a73399-6dee-4a7b-8fc0-f479974cbdcb Jun 18 00:16:14.136: INFO: Deleting PersistentVolumeClaim "pvc-c7fbt" Jun 18 00:16:14.140: INFO: Deleting PersistentVolumeClaim "pvc-mdpl9" Jun 18 00:16:14.144: INFO: Deleting PersistentVolumeClaim "pvc-f7vpr" Jun 18 00:16:14.147: INFO: 24/28 pods finished STEP: Delete "local-pvxrg2f" and create a new PV for same local volume storage STEP: Delete "local-pv2bfcp" and create a new PV for same local volume storage STEP: Delete "local-pvs65lp" and create a new PV for same local volume storage STEP: Delete "local-pvxt4q2" and create a new PV for same local volume storage STEP: Delete "local-pvcwgnt" and 
create a new PV for same local volume storage STEP: Delete "local-pvf6rd2" and create a new PV for same local volume storage Jun 18 00:16:20.112: INFO: Deleting pod pod-b5463926-c35d-4604-9bea-261b48380f88 Jun 18 00:16:20.121: INFO: Deleting PersistentVolumeClaim "pvc-7g5sd" Jun 18 00:16:20.127: INFO: Deleting PersistentVolumeClaim "pvc-pk5sz" Jun 18 00:16:20.130: INFO: Deleting PersistentVolumeClaim "pvc-gh9dr" Jun 18 00:16:20.134: INFO: 25/28 pods finished STEP: Delete "local-pvmp24x" and create a new PV for same local volume storage STEP: Delete "local-pv84jrb" and create a new PV for same local volume storage STEP: Delete "local-pvfjcfv" and create a new PV for same local volume storage Jun 18 00:16:21.111: INFO: Deleting pod pod-513b1c29-f826-41f9-982c-bc3cedd5bdc9 Jun 18 00:16:21.117: INFO: Deleting PersistentVolumeClaim "pvc-smhkz" Jun 18 00:16:21.121: INFO: Deleting PersistentVolumeClaim "pvc-pzhsg" Jun 18 00:16:21.126: INFO: Deleting PersistentVolumeClaim "pvc-8mc82" Jun 18 00:16:21.130: INFO: 26/28 pods finished STEP: Delete "local-pvn7rhk" and create a new PV for same local volume storage STEP: Delete "local-pvff494" and create a new PV for same local volume storage STEP: Delete "local-pvf9fp4" and create a new PV for same local volume storage Jun 18 00:16:22.110: INFO: Deleting pod pod-8f8f0a8f-f0e5-4569-8dd2-9ae6ef179fbb Jun 18 00:16:22.118: INFO: Deleting PersistentVolumeClaim "pvc-wnw5t" Jun 18 00:16:22.121: INFO: Deleting PersistentVolumeClaim "pvc-rs4t9" Jun 18 00:16:22.125: INFO: Deleting PersistentVolumeClaim "pvc-hvhfr" Jun 18 00:16:22.128: INFO: 27/28 pods finished Jun 18 00:16:22.128: INFO: Deleting pod pod-9fb6610b-7903-4772-bd11-24f2bc55d834 Jun 18 00:16:22.135: INFO: Deleting PersistentVolumeClaim "pvc-2srhk" STEP: Delete "local-pvnlhzk" and create a new PV for same local volume storage Jun 18 00:16:22.139: INFO: Deleting PersistentVolumeClaim "pvc-vgmxg" Jun 18 00:16:22.142: INFO: Deleting PersistentVolumeClaim "pvc-rtgsd" Jun 18 00:16:22.146: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519 STEP: Stop and wait for recycle goroutine to finish STEP: Delete "local-pv5nwdx" and create a new PV for same local volume storage STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "node1" STEP: Cleaning up PVC and PV Jun 18 00:16:22.154: INFO: pvc is nil Jun 18 00:16:22.154: INFO: Deleting PersistentVolume "local-pv9zvdr" STEP: Cleaning up PVC and PV Jun 18 00:16:22.157: INFO: pvc is nil Jun 18 00:16:22.157: INFO: Deleting PersistentVolume "local-pvzm8g4" STEP: Cleaning up PVC and PV Jun 18 00:16:22.162: INFO: pvc is nil Jun 18 00:16:22.162: INFO: Deleting PersistentVolume "local-pvx9q4p" STEP: Cleaning up PVC and PV Jun 18 00:16:22.165: INFO: pvc is nil Jun 18 00:16:22.165: INFO: Deleting PersistentVolume "local-pvgwgvs" STEP: Cleaning up PVC and PV Jun 18 00:16:22.168: INFO: pvc is nil Jun 18 00:16:22.168: INFO: Deleting PersistentVolume "local-pvsnbjt" STEP: Cleaning up PVC and PV Jun 18 00:16:22.171: INFO: pvc is nil Jun 18 00:16:22.171: INFO: Deleting PersistentVolume "local-pvnktjv" STEP: Cleaning up PVC and PV Jun 18 00:16:22.175: INFO: pvc is nil Jun 18 00:16:22.175: INFO: Deleting PersistentVolume "local-pvbvjq2" STEP: Cleaning up PVC and PV Jun 18 00:16:22.178: INFO: pvc is nil Jun 18 00:16:22.178: INFO: Deleting PersistentVolume "local-pv6jbnl" STEP: Cleaning up PVC and PV Jun 18 00:16:22.182: INFO: pvc 
is nil Jun 18 00:16:22.182: INFO: Deleting PersistentVolume "local-pvwhxn5" STEP: Cleaning up PVC and PV Jun 18 00:16:22.185: INFO: pvc is nil Jun 18 00:16:22.185: INFO: Deleting PersistentVolume "local-pvn7tzp" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0348c472-c80c-4f0d-ba3f-3f380ee358fc" Jun 18 00:16:22.189: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0348c472-c80c-4f0d-ba3f-3f380ee358fc"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:22.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:22.290: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0348c472-c80c-4f0d-ba3f-3f380ee358fc] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:22.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-61ab4274-d9de-447e-a44d-e94117cb9aa1" Jun 18 00:16:22.375: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-61ab4274-d9de-447e-a44d-e94117cb9aa1"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:22.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:22.468: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-61ab4274-d9de-447e-a44d-e94117cb9aa1] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:22.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-8cfb9f2f-611f-48db-9801-1f8a2631fbb4" Jun 18 00:16:22.555: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8cfb9f2f-611f-48db-9801-1f8a2631fbb4"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:22.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:22.660: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8cfb9f2f-611f-48db-9801-1f8a2631fbb4] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:22.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-e148220c-2246-4adc-a9cc-c64fe13172a9" Jun 18 00:16:22.755: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e148220c-2246-4adc-a9cc-c64fe13172a9"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:22.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:22.859: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e148220c-2246-4adc-a9cc-c64fe13172a9] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:22.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-40aad9c5-4dd1-456a-9bda-2df06fabd434" Jun 18 00:16:22.962: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-40aad9c5-4dd1-456a-9bda-2df06fabd434"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:22.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:23.069: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-40aad9c5-4dd1-456a-9bda-2df06fabd434] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6ad28f85-9e61-487f-908f-d8d60f801cc9" Jun 18 00:16:23.158: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6ad28f85-9e61-487f-908f-d8d60f801cc9"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:23.271: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6ad28f85-9e61-487f-908f-d8d60f801cc9] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-a89db4be-21c5-4b8a-ab80-ee303c865d1b" Jun 18 00:16:23.359: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a89db4be-21c5-4b8a-ab80-ee303c865d1b"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:23.457: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a89db4be-21c5-4b8a-ab80-ee303c865d1b] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.457: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f24fdf31-2364-4aba-b1dd-5ba2db5dec8a" Jun 18 00:16:23.542: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f24fdf31-2364-4aba-b1dd-5ba2db5dec8a"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:23.658: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f24fdf31-2364-4aba-b1dd-5ba2db5dec8a] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-53244ef3-8ff8-456e-a4e7-b698c0f30a6a" Jun 18 00:16:23.745: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-53244ef3-8ff8-456e-a4e7-b698c0f30a6a"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:23.841: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-53244ef3-8ff8-456e-a4e7-b698c0f30a6a] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-05a75871-d630-4bc4-b55c-cc66a5d9ffa1" Jun 18 00:16:23.919: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-05a75871-d630-4bc4-b55c-cc66a5d9ffa1"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:23.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:24.018: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-05a75871-d630-4bc4-b55c-cc66a5d9ffa1] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node1-9j9p4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "node2" STEP: Cleaning up PVC and PV Jun 18 00:16:24.110: INFO: pvc is nil Jun 18 00:16:24.110: INFO: Deleting PersistentVolume "local-pvssdgh" STEP: Cleaning up PVC and PV Jun 18 00:16:24.115: INFO: pvc is nil Jun 18 00:16:24.115: INFO: Deleting PersistentVolume "local-pvcjffj" STEP: Cleaning up PVC and PV Jun 18 00:16:24.119: INFO: pvc is nil Jun 18 00:16:24.119: INFO: Deleting PersistentVolume "local-pvscwl6" STEP: Cleaning up PVC and PV Jun 18 00:16:24.123: INFO: pvc is nil Jun 18 00:16:24.123: 
INFO: Deleting PersistentVolume "local-pvzrqzj" STEP: Cleaning up PVC and PV Jun 18 00:16:24.127: INFO: pvc is nil Jun 18 00:16:24.127: INFO: Deleting PersistentVolume "local-pvvfxg8" STEP: Cleaning up PVC and PV Jun 18 00:16:24.130: INFO: pvc is nil Jun 18 00:16:24.130: INFO: Deleting PersistentVolume "local-pvf6v2h" STEP: Cleaning up PVC and PV Jun 18 00:16:24.134: INFO: pvc is nil Jun 18 00:16:24.134: INFO: Deleting PersistentVolume "local-pvq4wdb" STEP: Cleaning up PVC and PV Jun 18 00:16:24.138: INFO: pvc is nil Jun 18 00:16:24.138: INFO: Deleting PersistentVolume "local-pvrzch6" STEP: Cleaning up PVC and PV Jun 18 00:16:24.142: INFO: pvc is nil Jun 18 00:16:24.142: INFO: Deleting PersistentVolume "local-pv87bcm" STEP: Cleaning up PVC and PV Jun 18 00:16:24.146: INFO: pvc is nil Jun 18 00:16:24.146: INFO: Deleting PersistentVolume "local-pvsw28v" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-7e0f2c12-6938-41dc-ae73-bee8f7c718dd" Jun 18 00:16:24.149: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7e0f2c12-6938-41dc-ae73-bee8f7c718dd"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:24.247: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7e0f2c12-6938-41dc-ae73-bee8f7c718dd] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-cfa12f70-5aaa-450d-bdf1-bb14301dd027" Jun 18 00:16:24.341: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-cfa12f70-5aaa-450d-bdf1-bb14301dd027"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:24.429: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cfa12f70-5aaa-450d-bdf1-bb14301dd027] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8994c814-e7d0-4ba2-a09f-68e5cb98b973" Jun 18 00:16:24.508: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8994c814-e7d0-4ba2-a09f-68e5cb98b973"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:24.596: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-8994c814-e7d0-4ba2-a09f-68e5cb98b973] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-daa25590-f424-4597-8d96-df175a89e2b6" Jun 18 00:16:24.678: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-daa25590-f424-4597-8d96-df175a89e2b6"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:24.776: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-daa25590-f424-4597-8d96-df175a89e2b6] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-10c51bf5-9431-4f5a-8533-f691ab7b603a" Jun 18 00:16:24.858: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-10c51bf5-9431-4f5a-8533-f691ab7b603a"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:24.962: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-10c51bf5-9431-4f5a-8533-f691ab7b603a] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:24.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-fd9d0453-2d78-4f47-9652-4bd1f3612320" Jun 18 00:16:25.047: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-fd9d0453-2d78-4f47-9652-4bd1f3612320"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:25.142: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fd9d0453-2d78-4f47-9652-4bd1f3612320] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-24d95cb0-e891-4862-aa2c-f8039e4e7df0" Jun 18 00:16:25.242: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-24d95cb0-e891-4862-aa2c-f8039e4e7df0"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:25.330: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-24d95cb0-e891-4862-aa2c-f8039e4e7df0] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-95b68d7b-4b9d-4143-bdf8-b97721e25b90" Jun 18 00:16:25.409: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-95b68d7b-4b9d-4143-bdf8-b97721e25b90"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:25.503: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-95b68d7b-4b9d-4143-bdf8-b97721e25b90] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-5380afc7-1e3c-44e4-8a89-cf36679163a2" Jun 18 00:16:25.601: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5380afc7-1e3c-44e4-8a89-cf36679163a2"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:25.691: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5380afc7-1e3c-44e4-8a89-cf36679163a2] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-2b946854-f925-4ebc-815b-ee438cb9a3f0" Jun 18 00:16:25.774: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2b946854-f925-4ebc-815b-ee438cb9a3f0"] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Jun 18 00:16:25.863: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2b946854-f925-4ebc-815b-ee438cb9a3f0] Namespace:persistent-local-volumes-test-6160 PodName:hostexec-node2-vbznx 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 18 00:16:25.863: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:16:25.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6160" for this suite. • [SLOW TEST:79.066 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":-1,"completed":15,"skipped":487,"failed":0} Jun 18 00:16:25.964: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:15:15.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-9224 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Jun 18 00:15:15.959: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-attacher Jun 18 00:15:15.962: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9224 Jun 18 00:15:15.962: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9224 Jun 18 00:15:15.965: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9224 Jun 18 00:15:15.968: INFO: creating *v1.Role: csi-mock-volumes-9224-2821/external-attacher-cfg-csi-mock-volumes-9224 Jun 18 00:15:15.971: INFO: creating *v1.RoleBinding: csi-mock-volumes-9224-2821/csi-attacher-role-cfg Jun 18 00:15:15.973: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-provisioner Jun 18 00:15:15.978: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9224 Jun 18 00:15:15.978: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9224 Jun 18 00:15:15.981: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9224 Jun 18 00:15:15.988: INFO: creating *v1.Role: csi-mock-volumes-9224-2821/external-provisioner-cfg-csi-mock-volumes-9224 Jun 18 00:15:15.993: INFO: creating *v1.RoleBinding: csi-mock-volumes-9224-2821/csi-provisioner-role-cfg Jun 18 00:15:15.999: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-resizer Jun 18 00:15:16.003: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9224 Jun 18 00:15:16.003: INFO: Define cluster role 
external-resizer-runner-csi-mock-volumes-9224 Jun 18 00:15:16.007: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9224 Jun 18 00:15:16.011: INFO: creating *v1.Role: csi-mock-volumes-9224-2821/external-resizer-cfg-csi-mock-volumes-9224 Jun 18 00:15:16.014: INFO: creating *v1.RoleBinding: csi-mock-volumes-9224-2821/csi-resizer-role-cfg Jun 18 00:15:16.017: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-snapshotter Jun 18 00:15:16.020: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9224 Jun 18 00:15:16.020: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9224 Jun 18 00:15:16.026: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9224 Jun 18 00:15:16.028: INFO: creating *v1.Role: csi-mock-volumes-9224-2821/external-snapshotter-leaderelection-csi-mock-volumes-9224 Jun 18 00:15:16.030: INFO: creating *v1.RoleBinding: csi-mock-volumes-9224-2821/external-snapshotter-leaderelection Jun 18 00:15:16.033: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-mock Jun 18 00:15:16.038: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9224 Jun 18 00:15:16.041: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9224 Jun 18 00:15:16.044: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9224 Jun 18 00:15:16.047: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9224 Jun 18 00:15:16.050: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9224 Jun 18 00:15:16.052: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9224 Jun 18 00:15:16.055: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9224 Jun 18 00:15:16.057: INFO: creating *v1.StatefulSet: csi-mock-volumes-9224-2821/csi-mockplugin Jun 18 00:15:16.062: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9224 Jun 18 00:15:16.066: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9224" Jun 18 00:15:16.068: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9224 to register on node node1 I0618 00:15:22.187496 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9224","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:15:22.210849 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0618 00:15:22.214842 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9224","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0618 00:15:22.216867 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0618 00:15:22.218480 40 csi.go:432] gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0618 00:15:23.166057 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9224"},"Error":"","FullError":null} STEP: Creating pod Jun 18 00:15:25.585: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jun 18 00:15:25.589: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wzf4n] to have phase Bound Jun 18 00:15:25.592: INFO: PersistentVolumeClaim pvc-wzf4n found but phase is Pending instead of Bound. I0618 00:15:25.621999 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc"}}},"Error":"","FullError":null} Jun 18 00:15:27.595: INFO: PersistentVolumeClaim pvc-wzf4n found and phase=Bound (2.005664139s) Jun 18 00:15:27.610: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wzf4n] to have phase Bound Jun 18 00:15:27.615: INFO: PersistentVolumeClaim pvc-wzf4n found and phase=Bound (4.137652ms) STEP: Waiting for expected CSI calls I0618 00:15:29.541389 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:15:29.544176 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc","storage.kubernetes.io/csiProvisionerIdentity":"1655511322221-8081-csi-mock-csi-mock-volumes-9224"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0618 00:15:30.046903 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:15:30.048743 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc","storage.kubernetes.io/csiProvisionerIdentity":"1655511322221-8081-csi-mock-csi-mock-volumes-9224"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0618 
00:15:31.056010 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:15:31.058769 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc","storage.kubernetes.io/csiProvisionerIdentity":"1655511322221-8081-csi-mock-csi-mock-volumes-9224"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I0618 00:15:33.116459 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:15:33.118: INFO: >>> kubeConfig: /root/.kube/config I0618 00:15:33.404912 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc","storage.kubernetes.io/csiProvisionerIdentity":"1655511322221-8081-csi-mock-csi-mock-volumes-9224"}},"Response":{},"Error":"","FullError":null} STEP: Waiting for pod to be running I0618 00:15:33.738857 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jun 18 00:15:33.740: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:15:33.835: INFO: >>> kubeConfig: /root/.kube/config Jun 18 00:15:33.933: INFO: >>> kubeConfig: /root/.kube/config I0618 00:15:34.072360 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc/globalmount","target_path":"/var/lib/kubelet/pods/6c040067-fb42-424c-870c-b6176d93c2ae/volumes/kubernetes.io~csi/pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc","storage.kubernetes.io/csiProvisionerIdentity":"1655511322221-8081-csi-mock-csi-mock-volumes-9224"}},"Response":{},"Error":"","FullError":null} STEP: Deleting the previously created pod Jun 18 00:15:39.626: INFO: Deleting pod "pvc-volume-tester-qzqlf" in namespace "csi-mock-volumes-9224" Jun 18 00:15:39.630: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qzqlf" to be fully deleted Jun 18 00:15:46.975: INFO: >>> kubeConfig: /root/.kube/config I0618 00:15:47.178186 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/6c040067-fb42-424c-870c-b6176d93c2ae/volumes/kubernetes.io~csi/pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc/mount"},"Response":{},"Error":"","FullError":null} I0618 
00:15:47.299828 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0618 00:15:47.331663 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-qzqlf Jun 18 00:15:52.636: INFO: Deleting pod "pvc-volume-tester-qzqlf" in namespace "csi-mock-volumes-9224" STEP: Deleting claim pvc-wzf4n Jun 18 00:15:52.647: INFO: Waiting up to 2m0s for PersistentVolume pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc to get deleted Jun 18 00:15:52.649: INFO: PersistentVolume pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc found and phase=Bound (2.184664ms) I0618 00:15:52.662456 40 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jun 18 00:15:54.652: INFO: PersistentVolume pvc-d37e8bf1-2e66-4e59-9846-b73f41f2d0cc was removed STEP: Deleting storageclass csi-mock-volumes-9224-sc52nwv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9224 STEP: Waiting for namespaces [csi-mock-volumes-9224] to vanish STEP: uninstalling csi mock driver Jun 18 00:16:00.770: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-attacher Jun 18 00:16:00.775: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9224 Jun 18 00:16:00.779: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9224 Jun 18 00:16:00.782: INFO: deleting *v1.Role: csi-mock-volumes-9224-2821/external-attacher-cfg-csi-mock-volumes-9224 Jun 18 00:16:00.785: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9224-2821/csi-attacher-role-cfg Jun 18 00:16:00.788: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-provisioner Jun 18 00:16:00.791: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9224 Jun 18 00:16:00.795: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9224 Jun 18 00:16:00.798: INFO: deleting *v1.Role: csi-mock-volumes-9224-2821/external-provisioner-cfg-csi-mock-volumes-9224 Jun 18 00:16:00.801: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9224-2821/csi-provisioner-role-cfg Jun 18 00:16:00.805: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-resizer Jun 18 00:16:00.808: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9224 Jun 18 00:16:00.811: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9224 Jun 18 00:16:00.814: INFO: deleting *v1.Role: csi-mock-volumes-9224-2821/external-resizer-cfg-csi-mock-volumes-9224 Jun 18 00:16:00.818: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9224-2821/csi-resizer-role-cfg Jun 18 00:16:00.821: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-snapshotter Jun 18 00:16:00.825: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9224 Jun 18 00:16:00.828: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9224 Jun 18 00:16:00.831: INFO: deleting *v1.Role: csi-mock-volumes-9224-2821/external-snapshotter-leaderelection-csi-mock-volumes-9224 Jun 18 00:16:00.835: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-9224-2821/external-snapshotter-leaderelection Jun 18 00:16:00.838: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9224-2821/csi-mock Jun 18 00:16:00.842: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9224 Jun 18 00:16:00.846: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9224 Jun 18 00:16:00.850: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9224 Jun 18 00:16:00.854: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9224 Jun 18 00:16:00.857: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9224 Jun 18 00:16:00.860: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9224 Jun 18 00:16:00.865: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9224 Jun 18 00:16:00.869: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9224-2821/csi-mockplugin Jun 18 00:16:00.872: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9224 STEP: deleting the driver namespace: csi-mock-volumes-9224-2821 STEP: Waiting for namespaces [csi-mock-volumes-9224-2821] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:16:44.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:89.000 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error","total":-1,"completed":17,"skipped":577,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error"]} Jun 18 00:16:44.896: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:11:50.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:16:50.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-854" for this suite. 
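The CSI mock spec logged above ("should retry NodeStage after NodeStage final error") shows the kubelet calling NodeStageVolume four times: the first three calls are answered with a final gRPC error (code InvalidArgument, "fake error") and the fourth succeeds, after which NodePublishVolume runs and the pod starts. A minimal sketch of a fault-injecting handler with that behaviour follows; it assumes the CSI spec's Go bindings and gRPC status codes, and the type name, field names and failure counter are illustrative only, not the real e2e mock driver code.

package mockdriver

import (
	"context"
	"sync"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// flakyNodeServer injects failures into NodeStageVolume; the remaining
// csi.NodeServer methods are omitted from this sketch.
type flakyNodeServer struct {
	mu       sync.Mutex
	failures int // number of NodeStageVolume calls that should still fail
}

func (s *flakyNodeServer) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
	s.mu.Lock()
	defer s.mu.Unlock()

	if s.failures > 0 {
		s.failures--
		// A "final" (non-retriable) error from the driver's point of view;
		// the kubelet nevertheless retries the whole NodeStageVolume
		// operation on its next sync, as the repeated calls in the log show.
		return nil, status.Error(codes.InvalidArgument, "fake error")
	}

	// Once the injected failures are exhausted, report success so the pod
	// can continue to NodePublishVolume and start.
	return &csi.NodeStageVolumeResponse{}, nil
}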
• [SLOW TEST:300.054 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":3,"skipped":59,"failed":0} Jun 18 00:16:50.091: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:31.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411 STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:18:31.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8035" for this suite. • [SLOW TEST:300.060 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":16,"skipped":488,"failed":1,"failures":["[sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error"]} Jun 18 00:18:31.381: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:13:33.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557 STEP: Creating configMap with name cm-test-opt-create-f6283bea-4f24-4049-acca-77d3de65e261 STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:18:33.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9561" for this suite. 
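The "Should fail non-optional pod creation ..." [Slow] specs above each create a pod whose volume points at a ConfigMap or Secret (or a specific key) that is never created, with the volume's Optional field left false; the kubelet therefore cannot populate the volume and the pod never starts, which is consistent with the ~300-second runtimes logged for these specs. A minimal sketch of such a pod object follows, using the k8s.io/api core/v1 and apimachinery meta/v1 types; the pod name, mount path, pause image and the "missing-config" ConfigMap name are placeholders, not the actual e2e fixtures.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonOptionalConfigMapPod returns a pod whose only volume references a
// ConfigMap that is never created. Because Optional is false, the kubelet
// cannot populate the volume and the pod stays un-started.
func nonOptionalConfigMapPod() *corev1.Pod {
	optional := false

	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"}, // placeholder name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "k8s.gcr.io/pause:3.5", // placeholder image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cfg",
					MountPath: "/etc/cfg",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// Intentionally refers to a ConfigMap that does not exist.
						LocalObjectReference: corev1.LocalObjectReference{Name: "missing-config"},
						Optional:             &optional,
						// The "key does not exist" variant instead sets Items to a
						// KeyToPath entry whose Key is absent from an existing ConfigMap.
					},
				},
			}},
		},
	}
}

The Secret variants swap the ConfigMap source for a corev1.SecretVolumeSource with SecretName set and Optional pointing at false.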
• [SLOW TEST:300.059 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":19,"skipped":614,"failed":0}
Jun 18 00:18:33.993: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 18 00:14:24.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should fail non-optional pod creation due to secret object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430
STEP: Creating the pod
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 18 00:19:24.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4229" for this suite.

• [SLOW TEST:300.061 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  Should fail non-optional pod creation due to secret object does not exist [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430
------------------------------
{"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":25,"skipped":823,"failed":0}
Jun 18 00:19:24.546: INFO: Running AfterSuite actions on all nodes
Jun 18 00:15:42.980: INFO: Running AfterSuite actions on all nodes
Jun 18 00:19:24.628: INFO: Running AfterSuite actions on node 1
Jun 18 00:19:24.628: INFO: Skipping dumping logs from cluster

Summarizing 3 Failures:

[Fail] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] [It] two pods: should call NodeStage after previous NodeUnstage final error
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017

[Fail] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] [It] two pods: should call NodeStage after previous NodeUnstage transient error
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1017

[Fail] [sig-storage] HostPath [It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742

Ran 166 of 5773 Specs in 1085.903 seconds
FAIL! -- 163 Passed | 3 Failed | 0 Pending | 5607 Skipped
Ginkgo ran 1 suite in 18m7.530142977s
Test Suite Failed
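As a quick check on the final tallies: 163 Passed + 3 Failed = 166, matching "Ran 166 of 5773 Specs", and 166 run + 5607 Skipped = 5773, so every spec in the suite is accounted for; the three failures are the two CSI mock NodeUnstage error cases and the HostPath volume-mode spec summarized above.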