Running Suite: Kubernetes e2e suite =================================== Random Seed: 1636163431 - Will randomize all specs Will run 5770 specs Running in parallel across 10 nodes Nov 6 01:50:33.225: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.227: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Nov 6 01:50:33.250: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 6 01:50:33.314: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting Nov 6 01:50:33.314: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting Nov 6 01:50:33.314: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 6 01:50:33.314: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 6 01:50:33.314: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Nov 6 01:50:33.338: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Nov 6 01:50:33.338: INFO: e2e test version: v1.21.5 Nov 6 01:50:33.340: INFO: kube-apiserver version: v1.21.1 Nov 6 01:50:33.341: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.347: INFO: Cluster IP family: ipv4 SSSSSSSSSS ------------------------------ Nov 6 01:50:33.347: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.370: INFO: Cluster IP family: ipv4 SSS ------------------------------ Nov 6 01:50:33.351: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.374: INFO: Cluster IP family: ipv4 Nov 6 01:50:33.351: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.375: INFO: Cluster IP family: ipv4 Nov 6 01:50:33.355: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.375: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSS ------------------------------ Nov 6 01:50:33.367: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.389: INFO: Cluster IP family: ipv4 SSS ------------------------------ Nov 6 01:50:33.371: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.390: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSS ------------------------------ Nov 6 01:50:33.376: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.397: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSS ------------------------------ Nov 6 01:50:33.386: INFO: >>> kubeConfig: /root/.kube/config 
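The suite startup above waits for every node to become schedulable and for the kube-system pods and daemonsets to report ready before any spec runs. A minimal sketch of how those same checks can be reproduced by hand against the same kubeconfig (an approximation only; the framework performs them through client-go, not kubectl):

```sh
# Rough manual equivalents of the readiness checks logged above.
export KUBECONFIG=/root/.kube/config

# Nodes: "schedulable" is roughly Ready and not marked unschedulable/cordoned.
kubectl get nodes

# kube-system pods: the suite waits until all are Running/Succeeded and Ready.
kubectl get pods -n kube-system -o wide

# kube-system daemonsets: desired and ready counts should match.
kubectl get daemonsets -n kube-system

# Client/server versions, matching the "e2e test version" / "kube-apiserver version" lines.
kubectl version
```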
Nov 6 01:50:33.407: INFO: Cluster IP family: ipv4 SS ------------------------------ Nov 6 01:50:33.387: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:33.408: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv W1106 01:50:33.414152 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:33.414: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:33.417: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 6 01:50:33.420: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:50:33.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6535" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.057 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:101 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning W1106 01:50:33.419890 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:33.420: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:33.429: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should not provision a volume in an unmanaged GCE zone. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Nov 6 01:50:33.431: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:50:33.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-9364" for this suite. S [SKIPPING] [0.051 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should not provision a volume in an unmanaged GCE zone. [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:451 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:452 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected W1106 01:50:33.486380 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:33.486: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:33.488: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 STEP: Creating projection with secret that has name projected-secret-test-61bb7944-6ee4-4d67-9d86-03f0d8d1d070 STEP: Creating a pod to test consume secrets Nov 6 01:50:34.118: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f3c36a85-bfba-4130-a012-fd43033ac353" in namespace "projected-5135" to be "Succeeded or Failed" Nov 6 01:50:34.121: INFO: Pod "pod-projected-secrets-f3c36a85-bfba-4130-a012-fd43033ac353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252462ms Nov 6 01:50:36.125: INFO: Pod "pod-projected-secrets-f3c36a85-bfba-4130-a012-fd43033ac353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006770247s Nov 6 01:50:38.128: INFO: Pod "pod-projected-secrets-f3c36a85-bfba-4130-a012-fd43033ac353": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009549657s Nov 6 01:50:40.133: INFO: Pod "pod-projected-secrets-f3c36a85-bfba-4130-a012-fd43033ac353": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.0149131s STEP: Saw pod success Nov 6 01:50:40.133: INFO: Pod "pod-projected-secrets-f3c36a85-bfba-4130-a012-fd43033ac353" satisfied condition "Succeeded or Failed" Nov 6 01:50:40.136: INFO: Trying to get logs from node node2 pod pod-projected-secrets-f3c36a85-bfba-4130-a012-fd43033ac353 container projected-secret-volume-test: STEP: delete the pod Nov 6 01:50:40.301: INFO: Waiting for pod pod-projected-secrets-f3c36a85-bfba-4130-a012-fd43033ac353 to disappear Nov 6 01:50:40.302: INFO: Pod pod-projected-secrets-f3c36a85-bfba-4130-a012-fd43033ac353 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:50:40.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5135" for this suite. STEP: Destroying namespace "secret-namespace-8230" for this suite. • [SLOW TEST:6.852 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":1,"skipped":32,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket W1106 01:50:33.469585 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:33.469: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:33.471: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 6 01:50:33.487: INFO: The status of Pod test-hostpath-type-6jn64 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:50:35.491: INFO: The status of Pod test-hostpath-type-6jn64 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:50:37.491: INFO: The status of Pod test-hostpath-type-6jn64 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:50:39.492: INFO: The status of Pod test-hostpath-type-6jn64 is Running (Ready = true) STEP: running on node node2 [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212 [AfterEach] [sig-storage] HostPathType Socket [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:50:47.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-6534" for this suite. • [SLOW TEST:14.101 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:212 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset","total":-1,"completed":1,"skipped":20,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:47.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 6 01:50:47.624: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:50:47.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-8898" for this suite. 
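Several of the blocks above log "Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled", followed by a failed dry-run because the cmk.intel.com admission webhook does not support dry run. A hand-rolled approximation of that probe using kubectl's server-side dry run is sketched below; the pod name and image are placeholders, not what the framework actually creates:

```sh
# Server-side dry run: the request passes through admission (PSP and webhooks
# included) but nothing is persisted. A webhook that rejects dry-run requests
# fails with an error similar to the one logged above.
kubectl run psp-dryrun-probe \
  --image=k8s.gcr.io/pause:3.4.1 \
  --dry-run=server -o yaml
```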
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 1 containers and 2 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6e3add60-7b1c-4d02-b5ec-239e69caf011" Nov 6 01:50:39.831: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6e3add60-7b1c-4d02-b5ec-239e69caf011" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6e3add60-7b1c-4d02-b5ec-239e69caf011" "/tmp/local-volume-test-6e3add60-7b1c-4d02-b5ec-239e69caf011"] Namespace:persistent-local-volumes-test-2478 PodName:hostexec-node1-24f9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:39.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:50:39.931: INFO: Creating a PV followed by a PVC Nov 6 01:50:39.938: INFO: Waiting for PV local-pv4jq2x to bind to PVC pvc-nfjfm Nov 6 01:50:39.938: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-nfjfm] to have phase Bound Nov 6 01:50:39.940: INFO: PersistentVolumeClaim pvc-nfjfm found but phase is Pending instead of Bound. 
Nov 6 01:50:41.944: INFO: PersistentVolumeClaim pvc-nfjfm found and phase=Bound (2.005472732s) Nov 6 01:50:41.944: INFO: Waiting up to 3m0s for PersistentVolume local-pv4jq2x to have phase Bound Nov 6 01:50:41.946: INFO: PersistentVolume local-pv4jq2x found and phase=Bound (2.57567ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 6 01:50:45.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2478 exec pod-f452e998-b3ef-4d1f-9df1-25c50ebe7641 --namespace=persistent-local-volumes-test-2478 -- stat -c %g /mnt/volume1' Nov 6 01:50:46.324: INFO: stderr: "" Nov 6 01:50:46.324: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 6 01:50:50.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2478 exec pod-18b9b8b7-accf-4df2-823d-56d8a7c9d926 --namespace=persistent-local-volumes-test-2478 -- stat -c %g /mnt/volume1' Nov 6 01:50:50.606: INFO: stderr: "" Nov 6 01:50:50.606: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-f452e998-b3ef-4d1f-9df1-25c50ebe7641 in namespace persistent-local-volumes-test-2478 STEP: Deleting second pod STEP: Deleting pod pod-18b9b8b7-accf-4df2-823d-56d8a7c9d926 in namespace persistent-local-volumes-test-2478 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:50:50.616: INFO: Deleting PersistentVolumeClaim "pvc-nfjfm" Nov 6 01:50:50.620: INFO: Deleting PersistentVolume "local-pv4jq2x" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6e3add60-7b1c-4d02-b5ec-239e69caf011" Nov 6 01:50:50.623: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6e3add60-7b1c-4d02-b5ec-239e69caf011"] Namespace:persistent-local-volumes-test-2478 PodName:hostexec-node1-24f9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:50.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:50:50.717: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6e3add60-7b1c-4d02-b5ec-239e69caf011] Namespace:persistent-local-volumes-test-2478 PodName:hostexec-node1-24f9d ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:50.717: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:50:50.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2478" for this suite. 
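The [Volume type: tmpfs] block above prepares the backing volume on the node through a hostexec pod and nsenter, binds a local PV/PVC to it, and then checks the group ID of the mounted volume inside each pod. Condensed into plain shell run directly on the node (a sketch; the directory name is an example, not the generated test path):

```sh
# On the node: create a small tmpfs mount to back the local PersistentVolume.
DIR=/tmp/local-volume-test-example            # example path
mkdir -p "$DIR"
mount -t tmpfs -o size=10m tmpfs "$DIR"

# From the client: once the PV/PVC are Bound and the pod is Running, the test
# asserts the volume's group matches the pod's fsGroup (1234 in the run above).
kubectl exec <pod-name> -n <namespace> -- stat -c %g /mnt/volume1

# Cleanup on the node, mirroring the teardown steps logged above.
umount "$DIR"
rm -r "$DIR"
```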
• [SLOW TEST:17.329 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":1,"skipped":20,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8c4a1e13-64cd-4a93-b302-c6e44ea140ff" Nov 6 01:50:37.524: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8c4a1e13-64cd-4a93-b302-c6e44ea140ff" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8c4a1e13-64cd-4a93-b302-c6e44ea140ff" "/tmp/local-volume-test-8c4a1e13-64cd-4a93-b302-c6e44ea140ff"] Namespace:persistent-local-volumes-test-8085 PodName:hostexec-node2-rxqgc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:37.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:50:38.126: INFO: Creating a PV followed by a PVC Nov 6 01:50:38.134: INFO: Waiting for PV local-pvpfbq9 to bind to PVC pvc-5gdtf Nov 6 01:50:38.134: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5gdtf] to have phase Bound Nov 6 01:50:38.136: INFO: PersistentVolumeClaim pvc-5gdtf found but phase is Pending instead of Bound. Nov 6 01:50:40.143: INFO: PersistentVolumeClaim pvc-5gdtf found but phase is Pending instead of Bound. Nov 6 01:50:42.147: INFO: PersistentVolumeClaim pvc-5gdtf found but phase is Pending instead of Bound. Nov 6 01:50:44.152: INFO: PersistentVolumeClaim pvc-5gdtf found but phase is Pending instead of Bound. Nov 6 01:50:46.157: INFO: PersistentVolumeClaim pvc-5gdtf found but phase is Pending instead of Bound. Nov 6 01:50:48.159: INFO: PersistentVolumeClaim pvc-5gdtf found but phase is Pending instead of Bound. Nov 6 01:50:50.164: INFO: PersistentVolumeClaim pvc-5gdtf found but phase is Pending instead of Bound. 
Nov 6 01:50:52.167: INFO: PersistentVolumeClaim pvc-5gdtf found and phase=Bound (14.032962785s) Nov 6 01:50:52.167: INFO: Waiting up to 3m0s for PersistentVolume local-pvpfbq9 to have phase Bound Nov 6 01:50:52.170: INFO: PersistentVolume local-pvpfbq9 found and phase=Bound (2.756556ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:50:56.200: INFO: pod "pod-96c54e5e-79a5-4619-95d5-b39a6760639a" created on Node "node2" STEP: Writing in pod1 Nov 6 01:50:56.200: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8085 PodName:pod-96c54e5e-79a5-4619-95d5-b39a6760639a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:50:56.200: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:56.288: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 6 01:50:56.288: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8085 PodName:pod-96c54e5e-79a5-4619-95d5-b39a6760639a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:50:56.288: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:56.404: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 6 01:50:56.404: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-8c4a1e13-64cd-4a93-b302-c6e44ea140ff > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8085 PodName:pod-96c54e5e-79a5-4619-95d5-b39a6760639a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:50:56.404: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:56.486: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-8c4a1e13-64cd-4a93-b302-c6e44ea140ff > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-96c54e5e-79a5-4619-95d5-b39a6760639a in namespace persistent-local-volumes-test-8085 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:50:56.493: INFO: Deleting PersistentVolumeClaim "pvc-5gdtf" Nov 6 01:50:56.496: INFO: Deleting PersistentVolume "local-pvpfbq9" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8c4a1e13-64cd-4a93-b302-c6e44ea140ff" Nov 6 01:50:56.499: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8c4a1e13-64cd-4a93-b302-c6e44ea140ff"] Namespace:persistent-local-volumes-test-8085 PodName:hostexec-node2-rxqgc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} Nov 6 01:50:56.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:50:56.597: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8c4a1e13-64cd-4a93-b302-c6e44ea140ff] Namespace:persistent-local-volumes-test-8085 PodName:hostexec-node2-rxqgc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:56.597: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:50:56.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8085" for this suite. • [SLOW TEST:23.216 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":15,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:50.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 STEP: Creating configMap with name projected-configmap-test-volume-cdc6f639-3c32-40ee-88a7-da6f7e40f35f STEP: Creating a pod to test consume configMaps Nov 6 01:50:50.892: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d864fd7-c170-45e3-8a8a-af6fc00e4478" in namespace "projected-60" to be "Succeeded or Failed" Nov 6 01:50:50.894: INFO: Pod "pod-projected-configmaps-0d864fd7-c170-45e3-8a8a-af6fc00e4478": Phase="Pending", Reason="", readiness=false. Elapsed: 1.903679ms Nov 6 01:50:52.897: INFO: Pod "pod-projected-configmaps-0d864fd7-c170-45e3-8a8a-af6fc00e4478": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00469003s Nov 6 01:50:54.900: INFO: Pod "pod-projected-configmaps-0d864fd7-c170-45e3-8a8a-af6fc00e4478": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008473325s Nov 6 01:50:56.903: INFO: Pod "pod-projected-configmaps-0d864fd7-c170-45e3-8a8a-af6fc00e4478": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.011581249s STEP: Saw pod success Nov 6 01:50:56.903: INFO: Pod "pod-projected-configmaps-0d864fd7-c170-45e3-8a8a-af6fc00e4478" satisfied condition "Succeeded or Failed" Nov 6 01:50:56.905: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-0d864fd7-c170-45e3-8a8a-af6fc00e4478 container agnhost-container: STEP: delete the pod Nov 6 01:50:56.924: INFO: Waiting for pod pod-projected-configmaps-0d864fd7-c170-45e3-8a8a-af6fc00e4478 to disappear Nov 6 01:50:56.926: INFO: Pod pod-projected-configmaps-0d864fd7-c170-45e3-8a8a-af6fc00e4478 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:50:56.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-60" for this suite. • [SLOW TEST:6.079 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":38,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:40.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-744a863e-3d64-4854-80dc-f4c31414d7bf" Nov 6 01:50:44.396: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-744a863e-3d64-4854-80dc-f4c31414d7bf && dd if=/dev/zero of=/tmp/local-volume-test-744a863e-3d64-4854-80dc-f4c31414d7bf/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-744a863e-3d64-4854-80dc-f4c31414d7bf/file] Namespace:persistent-local-volumes-test-2377 PodName:hostexec-node1-2cf5r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:44.396: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:44.507: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-744a863e-3d64-4854-80dc-f4c31414d7bf/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2377 PodName:hostexec-node1-2cf5r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} Nov 6 01:50:44.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:50:44.594: INFO: Creating a PV followed by a PVC Nov 6 01:50:44.602: INFO: Waiting for PV local-pvlps4c to bind to PVC pvc-xnrng Nov 6 01:50:44.602: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-xnrng] to have phase Bound Nov 6 01:50:44.604: INFO: PersistentVolumeClaim pvc-xnrng found but phase is Pending instead of Bound. Nov 6 01:50:46.609: INFO: PersistentVolumeClaim pvc-xnrng found but phase is Pending instead of Bound. Nov 6 01:50:48.611: INFO: PersistentVolumeClaim pvc-xnrng found but phase is Pending instead of Bound. Nov 6 01:50:50.615: INFO: PersistentVolumeClaim pvc-xnrng found but phase is Pending instead of Bound. Nov 6 01:50:52.619: INFO: PersistentVolumeClaim pvc-xnrng found and phase=Bound (8.016313443s) Nov 6 01:50:52.619: INFO: Waiting up to 3m0s for PersistentVolume local-pvlps4c to have phase Bound Nov 6 01:50:52.622: INFO: PersistentVolume local-pvlps4c found and phase=Bound (2.979785ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 6 01:51:00.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-2377 exec pod-ae3480f9-32c0-46c2-bfd5-fea900d8f4d1 --namespace=persistent-local-volumes-test-2377 -- stat -c %g /mnt/volume1' Nov 6 01:51:00.894: INFO: stderr: "" Nov 6 01:51:00.894: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-ae3480f9-32c0-46c2-bfd5-fea900d8f4d1 in namespace persistent-local-volumes-test-2377 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:51:00.900: INFO: Deleting PersistentVolumeClaim "pvc-xnrng" Nov 6 01:51:00.904: INFO: Deleting PersistentVolume "local-pvlps4c" Nov 6 01:51:00.909: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-744a863e-3d64-4854-80dc-f4c31414d7bf/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2377 PodName:hostexec-node1-2cf5r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:51:00.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node1" at path /tmp/local-volume-test-744a863e-3d64-4854-80dc-f4c31414d7bf/file Nov 6 01:51:00.999: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-2377 PodName:hostexec-node1-2cf5r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:51:00.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-744a863e-3d64-4854-80dc-f4c31414d7bf Nov 6 01:51:01.089: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-744a863e-3d64-4854-80dc-f4c31414d7bf] 
Namespace:persistent-local-volumes-test-2377 PodName:hostexec-node1-2cf5r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:51:01.090: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:01.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2377" for this suite. • [SLOW TEST:20.852 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":2,"skipped":44,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:56.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 STEP: Creating a pod to test downward API volume plugin Nov 6 01:50:56.783: INFO: Waiting up to 5m0s for pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a" in namespace "downward-api-1135" to be "Succeeded or Failed" Nov 6 01:50:56.786: INFO: Pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.619921ms Nov 6 01:50:58.789: INFO: Pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005917549s Nov 6 01:51:00.793: INFO: Pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009928515s Nov 6 01:51:02.797: INFO: Pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013594832s Nov 6 01:51:04.802: INFO: Pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01836565s Nov 6 01:51:06.804: INFO: Pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.02115721s Nov 6 01:51:08.808: INFO: Pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024419835s Nov 6 01:51:10.815: INFO: Pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.031189368s STEP: Saw pod success Nov 6 01:51:10.815: INFO: Pod "metadata-volume-14c02200-9260-421a-836b-893c57c0875a" satisfied condition "Succeeded or Failed" Nov 6 01:51:10.818: INFO: Trying to get logs from node node2 pod metadata-volume-14c02200-9260-421a-836b-893c57c0875a container client-container: STEP: delete the pod Nov 6 01:51:10.835: INFO: Waiting for pod metadata-volume-14c02200-9260-421a-836b-893c57c0875a to disappear Nov 6 01:51:10.837: INFO: Pod metadata-volume-14c02200-9260-421a-836b-893c57c0875a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:10.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1135" for this suite. • [SLOW TEST:14.098 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":36,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:10.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should provision storage with non-default reclaim policy Retain /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403 Nov 6 01:51:10.896: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:10.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-5354" for this suite. 
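The downward API and projected-volume blocks above all follow the same pattern: create a test pod, poll its phase until it reaches "Succeeded or Failed", then fetch the container log to verify the mounted content. A rough kubectl equivalent of that wait-and-inspect loop (a sketch; the pod and namespace names are placeholders):

```sh
POD=metadata-volume-example   # placeholder pod name
NS=downward-api-example       # placeholder namespace

# Poll the pod phase until it is terminal, like the framework's
# "Waiting up to 5m0s for pod ... to be Succeeded or Failed" loop.
while true; do
  phase=$(kubectl get pod "$POD" -n "$NS" -o jsonpath='{.status.phase}')
  echo "phase=$phase"
  [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ] && break
  sleep 2
done

# Then inspect the container output, matching the "Trying to get logs" step.
kubectl logs "$POD" -n "$NS" -c client-container
```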
S [SKIPPING] [0.037 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should provision storage with non-default reclaim policy Retain [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:403 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:404 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W1106 01:50:33.415967 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:33.416: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:33.418: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-5f089fc0-de9b-4b28-bc8f-3b63179c3264" Nov 6 01:50:37.473: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5f089fc0-de9b-4b28-bc8f-3b63179c3264 && dd if=/dev/zero of=/tmp/local-volume-test-5f089fc0-de9b-4b28-bc8f-3b63179c3264/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-5f089fc0-de9b-4b28-bc8f-3b63179c3264/file] Namespace:persistent-local-volumes-test-7663 PodName:hostexec-node1-247f8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:37.473: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:50:38.539: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-5f089fc0-de9b-4b28-bc8f-3b63179c3264/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7663 PodName:hostexec-node1-247f8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:38.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:50:38.707: INFO: Creating a PV followed by a PVC Nov 6 01:50:38.713: INFO: Waiting for PV local-pvbwm8v to bind to PVC pvc-zhkmw Nov 6 01:50:38.713: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zhkmw] to have phase Bound Nov 6 01:50:38.715: INFO: PersistentVolumeClaim pvc-zhkmw found but phase is Pending instead of Bound. 
Nov 6 01:50:40.720: INFO: PersistentVolumeClaim pvc-zhkmw found but phase is Pending instead of Bound. Nov 6 01:50:42.723: INFO: PersistentVolumeClaim pvc-zhkmw found but phase is Pending instead of Bound. Nov 6 01:50:44.727: INFO: PersistentVolumeClaim pvc-zhkmw found but phase is Pending instead of Bound. Nov 6 01:50:46.731: INFO: PersistentVolumeClaim pvc-zhkmw found but phase is Pending instead of Bound. Nov 6 01:50:48.735: INFO: PersistentVolumeClaim pvc-zhkmw found but phase is Pending instead of Bound. Nov 6 01:50:50.739: INFO: PersistentVolumeClaim pvc-zhkmw found but phase is Pending instead of Bound. Nov 6 01:50:52.742: INFO: PersistentVolumeClaim pvc-zhkmw found and phase=Bound (14.028268125s) Nov 6 01:50:52.742: INFO: Waiting up to 3m0s for PersistentVolume local-pvbwm8v to have phase Bound Nov 6 01:50:52.744: INFO: PersistentVolume local-pvbwm8v found and phase=Bound (2.454013ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 6 01:51:02.769: INFO: pod "pod-61e99061-35f0-4f7d-8e07-eb580cf949f1" created on Node "node1" STEP: Writing in pod1 Nov 6 01:51:02.769: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7663 PodName:pod-61e99061-35f0-4f7d-8e07-eb580cf949f1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:02.769: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:02.878: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:51:02.878: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7663 PodName:pod-61e99061-35f0-4f7d-8e07-eb580cf949f1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:02.879: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:03.020: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 6 01:51:15.043: INFO: pod "pod-5a67488a-0fa4-4aa1-8587-1facc851cff1" created on Node "node1" Nov 6 01:51:15.043: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7663 PodName:pod-5a67488a-0fa4-4aa1-8587-1facc851cff1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:15.043: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:15.667: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 6 01:51:15.668: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7663 PodName:pod-5a67488a-0fa4-4aa1-8587-1facc851cff1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:15.668: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:15.752: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 6 01:51:15.752: INFO: ExecWithOptions {Command:[/bin/sh -c cat 
/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7663 PodName:pod-61e99061-35f0-4f7d-8e07-eb580cf949f1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:15.752: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:15.860: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/dev/loop0", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-61e99061-35f0-4f7d-8e07-eb580cf949f1 in namespace persistent-local-volumes-test-7663 STEP: Deleting pod2 STEP: Deleting pod pod-5a67488a-0fa4-4aa1-8587-1facc851cff1 in namespace persistent-local-volumes-test-7663 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:51:15.868: INFO: Deleting PersistentVolumeClaim "pvc-zhkmw" Nov 6 01:51:15.871: INFO: Deleting PersistentVolume "local-pvbwm8v" Nov 6 01:51:15.875: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-5f089fc0-de9b-4b28-bc8f-3b63179c3264/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7663 PodName:hostexec-node1-247f8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:51:15.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-5f089fc0-de9b-4b28-bc8f-3b63179c3264/file Nov 6 01:51:15.982: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-7663 PodName:hostexec-node1-247f8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:51:15.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-5f089fc0-de9b-4b28-bc8f-3b63179c3264 Nov 6 01:51:16.110: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5f089fc0-de9b-4b28-bc8f-3b63179c3264] Namespace:persistent-local-volumes-test-7663 PodName:hostexec-node1-247f8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:51:16.110: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:16.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7663" for this suite. 
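The [Volume type: blockfswithoutformat] blocks above back the local PV with a loop device built from a 20 MiB file. The node-side commands the framework runs through its hostexec pod boil down to the following (a sketch, run directly on the node; the path is an example):

```sh
# On the node: create a 20 MiB backing file and attach it to a free loop device.
DIR=/tmp/local-volume-test-example   # example path
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120
losetup -f "$DIR/file"

# Discover which loop device was assigned (the framework parses `losetup` the same way).
LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
echo "$LOOP_DEV"

# Teardown after the PV/PVC are deleted, mirroring the cleanup steps logged above.
losetup -d "$LOOP_DEV"
rm -r "$DIR"
```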
• [SLOW TEST:42.823 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":4,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:16.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 6 01:51:16.262: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:16.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-4192" for this suite. 
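The "Two pods mounting a local volume at the same time" spec summarized above verifies shared access by writing from one pod and reading the same file back from the other over the same mount path. The in-pod commands are plain shell; via kubectl exec the round trip looks roughly like this (pod and namespace names are placeholders):

```sh
NS=persistent-local-volumes-test-example   # placeholder namespace

# Write from pod1, as in the "Writing in pod1" step above.
kubectl exec -n "$NS" pod1 -- /bin/sh -c \
  'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'

# Read the same file back from pod2; both pods mount the same local PV.
kubectl exec -n "$NS" pod2 -- /bin/sh -c 'cat /mnt/volume1/test-file'
# expected output: test-file-content
```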
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:01.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 6 01:51:01.248: INFO: The status of Pod test-hostpath-type-57622 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:03.252: INFO: The status of Pod test-hostpath-type-57622 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:05.253: INFO: The status of Pod test-hostpath-type-57622 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:07.255: INFO: The status of Pod test-hostpath-type-57622 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:09.252: INFO: The status of Pod test-hostpath-type-57622 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:11.253: INFO: The status of Pod test-hostpath-type-57622 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:13.252: INFO: The status of Pod test-hostpath-type-57622 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:15.253: INFO: The status of Pod test-hostpath-type-57622 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:17.251: INFO: The status of Pod test-hostpath-type-57622 is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Nov 6 01:51:17.254: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-815 PodName:test-hostpath-type-57622 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:17.254: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:23.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-815" for this suite. 
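The HostPathType Block Device spec above prepares its fixture inside the test pod with mknod before running the negative mount check. A minimal sketch of that preparation step (the major/minor numbers 89 and 1 are simply the values this run happened to use):

  # create a block special file; mounting it with hostPath type Directory is then expected to fail
  mknod /mnt/test/ablkdev b 89 1
  ls -l /mnt/test/ablkdev   # first character of the mode should be 'b'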
• [SLOW TEST:22.171 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:354 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory","total":-1,"completed":3,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:10.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 6 01:51:10.946: INFO: The status of Pod test-hostpath-type-vpptq is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:12.949: INFO: The status of Pod test-hostpath-type-vpptq is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:14.951: INFO: The status of Pod test-hostpath-type-vpptq is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:16.949: INFO: The status of Pod test-hostpath-type-vpptq is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:18.950: INFO: The status of Pod test-hostpath-type-vpptq is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:20.953: INFO: The status of Pod test-hostpath-type-vpptq is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:29.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-4545" for this suite. 
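Both HostPathType specs pass by observing the expected mount-failure event rather than a running pod ("Checking for HostPathType error event"). When reproducing such a failure by hand, the events can be listed with kubectl before the namespace is destroyed; this is an illustrative command, not part of the framework's own check:

  # recent events in the test namespace; the HostPathType mismatch surfaces as a mount failure
  kubectl get events -n host-path-type-directory-4545 --sort-by=.lastTimestamp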
• [SLOW TEST:18.102 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:94 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev","total":-1,"completed":3,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:29.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 6 01:51:29.100: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:29.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-9516" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "default (30s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:16.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 6 01:51:16.319: INFO: The status of Pod test-hostpath-type-p9hfx is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:18.323: INFO: The status of Pod test-hostpath-type-p9hfx is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:20.326: INFO: The status of Pod test-hostpath-type-p9hfx is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:22.322: INFO: The status of Pod test-hostpath-type-p9hfx is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:24.324: INFO: The status of Pod test-hostpath-type-p9hfx is Pending, 
waiting for it to be Running (with Ready = true) Nov 6 01:51:26.326: INFO: The status of Pod test-hostpath-type-p9hfx is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:51:28.322: INFO: The status of Pod test-hostpath-type-p9hfx is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:36.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-8519" for this suite. • [SLOW TEST:20.106 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:99 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev","total":-1,"completed":2,"skipped":20,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:47.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-210 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:50:47.794: INFO: creating *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-attacher Nov 6 01:50:47.797: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-210 Nov 6 01:50:47.797: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-210 Nov 6 01:50:47.801: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-210 Nov 6 01:50:47.803: INFO: creating *v1.Role: csi-mock-volumes-210-8445/external-attacher-cfg-csi-mock-volumes-210 Nov 6 01:50:47.806: INFO: creating *v1.RoleBinding: csi-mock-volumes-210-8445/csi-attacher-role-cfg Nov 6 01:50:47.809: INFO: creating *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-provisioner Nov 6 01:50:47.812: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-210 Nov 6 01:50:47.812: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-210 Nov 6 01:50:47.815: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-210 Nov 6 01:50:47.817: INFO: creating *v1.Role: csi-mock-volumes-210-8445/external-provisioner-cfg-csi-mock-volumes-210 Nov 6 01:50:47.820: INFO: creating 
*v1.RoleBinding: csi-mock-volumes-210-8445/csi-provisioner-role-cfg Nov 6 01:50:47.823: INFO: creating *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-resizer Nov 6 01:50:47.826: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-210 Nov 6 01:50:47.826: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-210 Nov 6 01:50:47.828: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-210 Nov 6 01:50:47.831: INFO: creating *v1.Role: csi-mock-volumes-210-8445/external-resizer-cfg-csi-mock-volumes-210 Nov 6 01:50:47.833: INFO: creating *v1.RoleBinding: csi-mock-volumes-210-8445/csi-resizer-role-cfg Nov 6 01:50:47.835: INFO: creating *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-snapshotter Nov 6 01:50:47.838: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-210 Nov 6 01:50:47.838: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-210 Nov 6 01:50:47.841: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-210 Nov 6 01:50:47.844: INFO: creating *v1.Role: csi-mock-volumes-210-8445/external-snapshotter-leaderelection-csi-mock-volumes-210 Nov 6 01:50:47.847: INFO: creating *v1.RoleBinding: csi-mock-volumes-210-8445/external-snapshotter-leaderelection Nov 6 01:50:47.850: INFO: creating *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-mock Nov 6 01:50:47.852: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-210 Nov 6 01:50:47.854: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-210 Nov 6 01:50:47.859: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-210 Nov 6 01:50:47.863: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-210 Nov 6 01:50:47.865: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-210 Nov 6 01:50:47.871: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-210 Nov 6 01:50:47.874: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-210 Nov 6 01:50:47.877: INFO: creating *v1.StatefulSet: csi-mock-volumes-210-8445/csi-mockplugin Nov 6 01:50:47.882: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-210 Nov 6 01:50:47.885: INFO: creating *v1.StatefulSet: csi-mock-volumes-210-8445/csi-mockplugin-attacher Nov 6 01:50:47.888: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-210" Nov 6 01:50:47.890: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-210 to register on node node2 STEP: Creating pod Nov 6 01:51:04.162: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:51:04.166: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-44c4k] to have phase Bound Nov 6 01:51:04.169: INFO: PersistentVolumeClaim pvc-44c4k found but phase is Pending instead of Bound. 
Nov 6 01:51:06.174: INFO: PersistentVolumeClaim pvc-44c4k found and phase=Bound (2.00785005s) STEP: Deleting the previously created pod Nov 6 01:51:22.201: INFO: Deleting pod "pvc-volume-tester-svgs9" in namespace "csi-mock-volumes-210" Nov 6 01:51:22.206: INFO: Wait up to 5m0s for pod "pvc-volume-tester-svgs9" to be fully deleted STEP: Checking CSI driver logs Nov 6 01:51:30.225: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/b7326abe-08ad-495d-a4a9-848dbc4b9041/volumes/kubernetes.io~csi/pvc-bb1ce683-70dd-4b99-a83a-94d10b3d53db/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-svgs9 Nov 6 01:51:30.225: INFO: Deleting pod "pvc-volume-tester-svgs9" in namespace "csi-mock-volumes-210" STEP: Deleting claim pvc-44c4k Nov 6 01:51:30.233: INFO: Waiting up to 2m0s for PersistentVolume pvc-bb1ce683-70dd-4b99-a83a-94d10b3d53db to get deleted Nov 6 01:51:30.236: INFO: PersistentVolume pvc-bb1ce683-70dd-4b99-a83a-94d10b3d53db found and phase=Bound (2.27712ms) Nov 6 01:51:32.241: INFO: PersistentVolume pvc-bb1ce683-70dd-4b99-a83a-94d10b3d53db found and phase=Released (2.007425068s) Nov 6 01:51:34.244: INFO: PersistentVolume pvc-bb1ce683-70dd-4b99-a83a-94d10b3d53db was removed STEP: Deleting storageclass csi-mock-volumes-210-scsq4h7 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-210 STEP: Waiting for namespaces [csi-mock-volumes-210] to vanish STEP: uninstalling csi mock driver Nov 6 01:51:40.257: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-attacher Nov 6 01:51:40.261: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-210 Nov 6 01:51:40.265: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-210 Nov 6 01:51:40.268: INFO: deleting *v1.Role: csi-mock-volumes-210-8445/external-attacher-cfg-csi-mock-volumes-210 Nov 6 01:51:40.272: INFO: deleting *v1.RoleBinding: csi-mock-volumes-210-8445/csi-attacher-role-cfg Nov 6 01:51:40.276: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-provisioner Nov 6 01:51:40.282: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-210 Nov 6 01:51:40.286: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-210 Nov 6 01:51:40.289: INFO: deleting *v1.Role: csi-mock-volumes-210-8445/external-provisioner-cfg-csi-mock-volumes-210 Nov 6 01:51:40.293: INFO: deleting *v1.RoleBinding: csi-mock-volumes-210-8445/csi-provisioner-role-cfg Nov 6 01:51:40.296: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-resizer Nov 6 01:51:40.299: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-210 Nov 6 01:51:40.304: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-210 Nov 6 01:51:40.307: INFO: deleting *v1.Role: csi-mock-volumes-210-8445/external-resizer-cfg-csi-mock-volumes-210 Nov 6 01:51:40.310: INFO: deleting *v1.RoleBinding: csi-mock-volumes-210-8445/csi-resizer-role-cfg Nov 6 01:51:40.313: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-snapshotter Nov 6 01:51:40.318: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-210 Nov 6 01:51:40.322: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-210 Nov 6 01:51:40.324: INFO: deleting *v1.Role: 
csi-mock-volumes-210-8445/external-snapshotter-leaderelection-csi-mock-volumes-210 Nov 6 01:51:40.328: INFO: deleting *v1.RoleBinding: csi-mock-volumes-210-8445/external-snapshotter-leaderelection Nov 6 01:51:40.332: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-210-8445/csi-mock Nov 6 01:51:40.335: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-210 Nov 6 01:51:40.338: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-210 Nov 6 01:51:40.342: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-210 Nov 6 01:51:40.345: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-210 Nov 6 01:51:40.349: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-210 Nov 6 01:51:40.353: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-210 Nov 6 01:51:40.356: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-210 Nov 6 01:51:40.359: INFO: deleting *v1.StatefulSet: csi-mock-volumes-210-8445/csi-mockplugin Nov 6 01:51:40.363: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-210 Nov 6 01:51:40.366: INFO: deleting *v1.StatefulSet: csi-mock-volumes-210-8445/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-210-8445 STEP: Waiting for namespaces [csi-mock-volumes-210-8445] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:52.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:64.662 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=nil /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":2,"skipped":91,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1106 01:50:33.905927 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:33.906: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:33.908: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-3457 STEP: Waiting for a default service account to be provisioned in 
namespace STEP: deploying csi mock driver Nov 6 01:50:35.336: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-attacher Nov 6 01:50:35.339: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3457 Nov 6 01:50:35.339: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3457 Nov 6 01:50:35.342: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3457 Nov 6 01:50:35.345: INFO: creating *v1.Role: csi-mock-volumes-3457-8478/external-attacher-cfg-csi-mock-volumes-3457 Nov 6 01:50:35.348: INFO: creating *v1.RoleBinding: csi-mock-volumes-3457-8478/csi-attacher-role-cfg Nov 6 01:50:35.350: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-provisioner Nov 6 01:50:35.353: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3457 Nov 6 01:50:35.353: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3457 Nov 6 01:50:35.356: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3457 Nov 6 01:50:35.360: INFO: creating *v1.Role: csi-mock-volumes-3457-8478/external-provisioner-cfg-csi-mock-volumes-3457 Nov 6 01:50:35.363: INFO: creating *v1.RoleBinding: csi-mock-volumes-3457-8478/csi-provisioner-role-cfg Nov 6 01:50:35.366: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-resizer Nov 6 01:50:35.368: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3457 Nov 6 01:50:35.368: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3457 Nov 6 01:50:35.371: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3457 Nov 6 01:50:35.374: INFO: creating *v1.Role: csi-mock-volumes-3457-8478/external-resizer-cfg-csi-mock-volumes-3457 Nov 6 01:50:35.376: INFO: creating *v1.RoleBinding: csi-mock-volumes-3457-8478/csi-resizer-role-cfg Nov 6 01:50:35.379: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-snapshotter Nov 6 01:50:35.381: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3457 Nov 6 01:50:35.381: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3457 Nov 6 01:50:35.384: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3457 Nov 6 01:50:35.387: INFO: creating *v1.Role: csi-mock-volumes-3457-8478/external-snapshotter-leaderelection-csi-mock-volumes-3457 Nov 6 01:50:35.389: INFO: creating *v1.RoleBinding: csi-mock-volumes-3457-8478/external-snapshotter-leaderelection Nov 6 01:50:35.391: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-mock Nov 6 01:50:35.394: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3457 Nov 6 01:50:35.397: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3457 Nov 6 01:50:35.399: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3457 Nov 6 01:50:35.402: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3457 Nov 6 01:50:35.404: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3457 Nov 6 01:50:35.407: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3457 Nov 6 01:50:35.410: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3457 Nov 6 01:50:35.413: INFO: creating *v1.StatefulSet: csi-mock-volumes-3457-8478/csi-mockplugin Nov 6 01:50:35.418: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3457 
Nov 6 01:50:35.421: INFO: creating *v1.StatefulSet: csi-mock-volumes-3457-8478/csi-mockplugin-attacher Nov 6 01:50:35.424: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3457" Nov 6 01:50:35.427: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3457 to register on node node2 STEP: Creating pod Nov 6 01:51:01.824: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:51:01.828: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-gr96r] to have phase Bound Nov 6 01:51:01.830: INFO: PersistentVolumeClaim pvc-gr96r found but phase is Pending instead of Bound. Nov 6 01:51:03.836: INFO: PersistentVolumeClaim pvc-gr96r found and phase=Bound (2.00734025s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-d5p8g Nov 6 01:51:25.865: INFO: Deleting pod "pvc-volume-tester-d5p8g" in namespace "csi-mock-volumes-3457" Nov 6 01:51:25.871: INFO: Wait up to 5m0s for pod "pvc-volume-tester-d5p8g" to be fully deleted STEP: Deleting claim pvc-gr96r Nov 6 01:51:35.883: INFO: Waiting up to 2m0s for PersistentVolume pvc-91a3aaf9-d844-4f1d-9880-c620d711dc62 to get deleted Nov 6 01:51:35.885: INFO: PersistentVolume pvc-91a3aaf9-d844-4f1d-9880-c620d711dc62 found and phase=Bound (2.280336ms) Nov 6 01:51:37.890: INFO: PersistentVolume pvc-91a3aaf9-d844-4f1d-9880-c620d711dc62 was removed STEP: Deleting storageclass csi-mock-volumes-3457-scg7sgv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3457 STEP: Waiting for namespaces [csi-mock-volumes-3457] to vanish STEP: uninstalling csi mock driver Nov 6 01:51:43.904: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-attacher Nov 6 01:51:43.908: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3457 Nov 6 01:51:43.911: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3457 Nov 6 01:51:43.914: INFO: deleting *v1.Role: csi-mock-volumes-3457-8478/external-attacher-cfg-csi-mock-volumes-3457 Nov 6 01:51:43.918: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3457-8478/csi-attacher-role-cfg Nov 6 01:51:43.922: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-provisioner Nov 6 01:51:43.925: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3457 Nov 6 01:51:43.928: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3457 Nov 6 01:51:43.934: INFO: deleting *v1.Role: csi-mock-volumes-3457-8478/external-provisioner-cfg-csi-mock-volumes-3457 Nov 6 01:51:43.940: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3457-8478/csi-provisioner-role-cfg Nov 6 01:51:43.949: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-resizer Nov 6 01:51:43.955: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3457 Nov 6 01:51:43.959: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3457 Nov 6 01:51:43.962: INFO: deleting *v1.Role: csi-mock-volumes-3457-8478/external-resizer-cfg-csi-mock-volumes-3457 Nov 6 01:51:43.966: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3457-8478/csi-resizer-role-cfg Nov 6 01:51:43.969: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-snapshotter Nov 6 01:51:43.972: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3457 Nov 6 01:51:43.975: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3457 Nov 6 01:51:43.978: INFO: deleting *v1.Role: 
csi-mock-volumes-3457-8478/external-snapshotter-leaderelection-csi-mock-volumes-3457 Nov 6 01:51:43.981: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3457-8478/external-snapshotter-leaderelection Nov 6 01:51:43.985: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3457-8478/csi-mock Nov 6 01:51:43.988: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3457 Nov 6 01:51:43.992: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3457 Nov 6 01:51:43.995: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3457 Nov 6 01:51:43.998: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3457 Nov 6 01:51:44.002: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3457 Nov 6 01:51:44.005: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3457 Nov 6 01:51:44.009: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3457 Nov 6 01:51:44.012: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3457-8478/csi-mockplugin Nov 6 01:51:44.015: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3457 Nov 6 01:51:44.019: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3457-8478/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3457-8478 STEP: Waiting for namespaces [csi-mock-volumes-3457-8478] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:56.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:82.553 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should require VolumeAttach for drivers with attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":1,"skipped":30,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:56.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 6 01:51:56.070: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:56.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-1061" for this suite. 
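The CSI attach spec logged above ("should require VolumeAttach for drivers with attachment") succeeds once a VolumeAttachment object appears for the mock driver. Outside the framework, a rough equivalent of that check (the driver name is specific to this run) would be:

  # VolumeAttachment objects are cluster-scoped; filter on the mock driver's name
  kubectl get volumeattachments -o wide | grep csi-mock-csi-mock-volumes-3457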
S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be able to delete a non-existent PD without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:56.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 6 01:51:56.191: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:51:56.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-4443" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:86 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:52.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:51:56.490: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-4afad232-0ea8-4119-8c46-4822f8f84f15 && mount --bind /tmp/local-volume-test-4afad232-0ea8-4119-8c46-4822f8f84f15 /tmp/local-volume-test-4afad232-0ea8-4119-8c46-4822f8f84f15] Namespace:persistent-local-volumes-test-3881 PodName:hostexec-node1-dxqsq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:51:56.490: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:51:56.594: INFO: Creating a PV followed by a PVC Nov 6 01:51:56.601: INFO: Waiting for PV local-pvzgzp4 to bind to PVC pvc-8kdfm Nov 6 01:51:56.601: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8kdfm] to have phase Bound Nov 6 01:51:56.603: INFO: PersistentVolumeClaim pvc-8kdfm found but phase is Pending instead of Bound. Nov 6 01:51:58.607: INFO: PersistentVolumeClaim pvc-8kdfm found but phase is Pending instead of Bound. Nov 6 01:52:00.610: INFO: PersistentVolumeClaim pvc-8kdfm found but phase is Pending instead of Bound. Nov 6 01:52:02.614: INFO: PersistentVolumeClaim pvc-8kdfm found but phase is Pending instead of Bound. Nov 6 01:52:04.619: INFO: PersistentVolumeClaim pvc-8kdfm found but phase is Pending instead of Bound. Nov 6 01:52:06.622: INFO: PersistentVolumeClaim pvc-8kdfm found but phase is Pending instead of Bound. Nov 6 01:52:08.625: INFO: PersistentVolumeClaim pvc-8kdfm found and phase=Bound (12.024179563s) Nov 6 01:52:08.625: INFO: Waiting up to 3m0s for PersistentVolume local-pvzgzp4 to have phase Bound Nov 6 01:52:08.627: INFO: PersistentVolume local-pvzgzp4 found and phase=Bound (2.277096ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:52:12.673: INFO: pod "pod-ab309e27-4205-4b50-8588-0baee403c47a" created on Node "node1" STEP: Writing in pod1 Nov 6 01:52:12.673: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3881 PodName:pod-ab309e27-4205-4b50-8588-0baee403c47a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:12.673: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:12.782: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 6 01:52:12.782: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3881 PodName:pod-ab309e27-4205-4b50-8588-0baee403c47a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:12.782: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:12.871: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-ab309e27-4205-4b50-8588-0baee403c47a in namespace persistent-local-volumes-test-3881 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:52:12.876: INFO: Deleting PersistentVolumeClaim "pvc-8kdfm" Nov 6 01:52:12.880: INFO: Deleting PersistentVolume "local-pvzgzp4" STEP: Removing the test directory Nov 6 01:52:12.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 
umount /tmp/local-volume-test-4afad232-0ea8-4119-8c46-4822f8f84f15 && rm -r /tmp/local-volume-test-4afad232-0ea8-4119-8c46-4822f8f84f15] Namespace:persistent-local-volumes-test-3881 PodName:hostexec-node1-dxqsq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:12.884: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:13.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3881" for this suite. • [SLOW TEST:20.638 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":107,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:36.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:51:52.533: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-97573448-c377-46e3-8e19-31c67ea16f45 && mount --bind /tmp/local-volume-test-97573448-c377-46e3-8e19-31c67ea16f45 /tmp/local-volume-test-97573448-c377-46e3-8e19-31c67ea16f45] Namespace:persistent-local-volumes-test-5562 PodName:hostexec-node2-trqmd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:51:52.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:51:52.665: INFO: Creating a PV followed by a PVC Nov 6 01:51:52.671: INFO: Waiting for PV local-pvc6mbk to bind to PVC pvc-2gxfc Nov 6 01:51:52.671: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-2gxfc] to have phase Bound Nov 6 01:51:52.674: INFO: PersistentVolumeClaim pvc-2gxfc found but phase is Pending instead of Bound. Nov 6 01:51:54.678: INFO: PersistentVolumeClaim pvc-2gxfc found but phase is Pending instead of Bound. 
Nov 6 01:51:56.681: INFO: PersistentVolumeClaim pvc-2gxfc found but phase is Pending instead of Bound. Nov 6 01:51:58.684: INFO: PersistentVolumeClaim pvc-2gxfc found but phase is Pending instead of Bound. Nov 6 01:52:00.691: INFO: PersistentVolumeClaim pvc-2gxfc found but phase is Pending instead of Bound. Nov 6 01:52:02.695: INFO: PersistentVolumeClaim pvc-2gxfc found but phase is Pending instead of Bound. Nov 6 01:52:04.698: INFO: PersistentVolumeClaim pvc-2gxfc found but phase is Pending instead of Bound. Nov 6 01:52:06.701: INFO: PersistentVolumeClaim pvc-2gxfc found but phase is Pending instead of Bound. Nov 6 01:52:08.704: INFO: PersistentVolumeClaim pvc-2gxfc found and phase=Bound (16.032496936s) Nov 6 01:52:08.704: INFO: Waiting up to 3m0s for PersistentVolume local-pvc6mbk to have phase Bound Nov 6 01:52:08.706: INFO: PersistentVolume local-pvc6mbk found and phase=Bound (2.346228ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 6 01:52:16.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5562 exec pod-061d17fe-e739-49a3-8a86-f1b0cc6f7d0a --namespace=persistent-local-volumes-test-5562 -- stat -c %g /mnt/volume1' Nov 6 01:52:16.961: INFO: stderr: "" Nov 6 01:52:16.961: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-061d17fe-e739-49a3-8a86-f1b0cc6f7d0a in namespace persistent-local-volumes-test-5562 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:52:16.968: INFO: Deleting PersistentVolumeClaim "pvc-2gxfc" Nov 6 01:52:16.971: INFO: Deleting PersistentVolume "local-pvc6mbk" STEP: Removing the test directory Nov 6 01:52:16.975: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-97573448-c377-46e3-8e19-31c67ea16f45 && rm -r /tmp/local-volume-test-97573448-c377-46e3-8e19-31c67ea16f45] Namespace:persistent-local-volumes-test-5562 PodName:hostexec-node2-trqmd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.975: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:17.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5562" for this suite. 
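The "dir-bindmounted" volume type exercised by the spec above is simply a host directory bind-mounted onto itself, with the local PV pointing at the mount. A condensed sketch of the setup, the fsGroup verification, and the teardown as logged (paths, namespace, pod name, and the expected GID 1234 are all values from this run):

  # setup on the node: directory bind-mounted onto itself
  dir=/tmp/local-volume-test-97573448-c377-46e3-8e19-31c67ea16f45
  mkdir "$dir" && mount --bind "$dir" "$dir"

  # check that fsGroup was applied to the mounted volume (expects "1234")
  kubectl --namespace=persistent-local-volumes-test-5562 exec pod-061d17fe-e739-49a3-8a86-f1b0cc6f7d0a -- stat -c %g /mnt/volume1

  # teardown on the node
  umount "$dir" && rm -r "$dir"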
• [SLOW TEST:40.623 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":3,"skipped":64,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:17.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 6 01:52:17.151: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:17.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-2923" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for read-only PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test W1106 01:50:34.855329 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:34.855: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:34.857: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:455 STEP: Setting up 10 local volumes on node "node1" STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4a69964e-def4-4789-af86-7e58e6720551" Nov 6 01:50:40.891: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4a69964e-def4-4789-af86-7e58e6720551" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4a69964e-def4-4789-af86-7e58e6720551" "/tmp/local-volume-test-4a69964e-def4-4789-af86-7e58e6720551"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:40.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-40925952-70d0-4c6d-872b-f668668cc210" Nov 6 01:50:41.066: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-40925952-70d0-4c6d-872b-f668668cc210" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-40925952-70d0-4c6d-872b-f668668cc210" "/tmp/local-volume-test-40925952-70d0-4c6d-872b-f668668cc210"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:41.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-95bf36f7-57d2-45cd-8ac2-8b98b7124ccc" Nov 6 01:50:41.184: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-95bf36f7-57d2-45cd-8ac2-8b98b7124ccc" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-95bf36f7-57d2-45cd-8ac2-8b98b7124ccc" "/tmp/local-volume-test-95bf36f7-57d2-45cd-8ac2-8b98b7124ccc"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:41.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-8fe17d23-0266-4158-a16b-f2fba441bb39" Nov 6 01:50:41.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8fe17d23-0266-4158-a16b-f2fba441bb39" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8fe17d23-0266-4158-a16b-f2fba441bb39" "/tmp/local-volume-test-8fe17d23-0266-4158-a16b-f2fba441bb39"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:41.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-05fedaf4-410e-4a3d-b26e-3a2dde6c419c" Nov 6 01:50:41.415: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-05fedaf4-410e-4a3d-b26e-3a2dde6c419c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-05fedaf4-410e-4a3d-b26e-3a2dde6c419c" "/tmp/local-volume-test-05fedaf4-410e-4a3d-b26e-3a2dde6c419c"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:41.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-55c2be5e-bbdf-44bd-aff4-d6bacd4a36cc" Nov 6 01:50:41.531: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-55c2be5e-bbdf-44bd-aff4-d6bacd4a36cc" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-55c2be5e-bbdf-44bd-aff4-d6bacd4a36cc" "/tmp/local-volume-test-55c2be5e-bbdf-44bd-aff4-d6bacd4a36cc"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:41.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-9b8b1864-8ad1-41c1-985d-c7873e952c43" Nov 6 01:50:41.633: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9b8b1864-8ad1-41c1-985d-c7873e952c43" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9b8b1864-8ad1-41c1-985d-c7873e952c43" "/tmp/local-volume-test-9b8b1864-8ad1-41c1-985d-c7873e952c43"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:41.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-99b4faf0-04f7-43f0-8ede-41681965cfc8" Nov 6 01:50:41.745: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-99b4faf0-04f7-43f0-8ede-41681965cfc8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-99b4faf0-04f7-43f0-8ede-41681965cfc8" "/tmp/local-volume-test-99b4faf0-04f7-43f0-8ede-41681965cfc8"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:41.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-920a3766-81f8-446c-ad90-4596982b21b8" Nov 6 01:50:41.837: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-920a3766-81f8-446c-ad90-4596982b21b8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-920a3766-81f8-446c-ad90-4596982b21b8" "/tmp/local-volume-test-920a3766-81f8-446c-ad90-4596982b21b8"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:41.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-2aa674d7-912b-447a-a0f7-50c0edbaa690" Nov 6 01:50:41.999: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2aa674d7-912b-447a-a0f7-50c0edbaa690" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2aa674d7-912b-447a-a0f7-50c0edbaa690" "/tmp/local-volume-test-2aa674d7-912b-447a-a0f7-50c0edbaa690"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:41.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-d5b09707-b361-4cfb-906a-55c25cdf4f33" Nov 6 01:50:52.145: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d5b09707-b361-4cfb-906a-55c25cdf4f33" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d5b09707-b361-4cfb-906a-55c25cdf4f33" "/tmp/local-volume-test-d5b09707-b361-4cfb-906a-55c25cdf4f33"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:52.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ebfa976f-8a4d-4fc3-91ba-fe941220b8d6" Nov 6 01:50:52.326: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ebfa976f-8a4d-4fc3-91ba-fe941220b8d6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ebfa976f-8a4d-4fc3-91ba-fe941220b8d6" "/tmp/local-volume-test-ebfa976f-8a4d-4fc3-91ba-fe941220b8d6"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:52.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8e97e5d6-c8b8-4eae-8e9e-93a8072ba105" Nov 6 01:50:52.448: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-8e97e5d6-c8b8-4eae-8e9e-93a8072ba105" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8e97e5d6-c8b8-4eae-8e9e-93a8072ba105" "/tmp/local-volume-test-8e97e5d6-c8b8-4eae-8e9e-93a8072ba105"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:52.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-eb792536-b789-4a71-aa80-cbe3a5713696" Nov 6 01:50:52.653: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-eb792536-b789-4a71-aa80-cbe3a5713696" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-eb792536-b789-4a71-aa80-cbe3a5713696" "/tmp/local-volume-test-eb792536-b789-4a71-aa80-cbe3a5713696"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:52.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-336d9ac9-5bf0-4bba-9e2a-022e73e8d6b0" Nov 6 01:50:52.818: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-336d9ac9-5bf0-4bba-9e2a-022e73e8d6b0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-336d9ac9-5bf0-4bba-9e2a-022e73e8d6b0" "/tmp/local-volume-test-336d9ac9-5bf0-4bba-9e2a-022e73e8d6b0"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:52.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-4d14979f-57ac-4304-a345-ec2147b3d4d1" Nov 6 01:50:53.018: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4d14979f-57ac-4304-a345-ec2147b3d4d1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4d14979f-57ac-4304-a345-ec2147b3d4d1" "/tmp/local-volume-test-4d14979f-57ac-4304-a345-ec2147b3d4d1"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:53.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-6f2f4f35-c7ff-4af4-bf9d-d2f0ed3db422" Nov 6 01:50:53.196: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6f2f4f35-c7ff-4af4-bf9d-d2f0ed3db422" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6f2f4f35-c7ff-4af4-bf9d-d2f0ed3db422" "/tmp/local-volume-test-6f2f4f35-c7ff-4af4-bf9d-d2f0ed3db422"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:53.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9b172b00-7e4e-4e6f-a810-4a95beb1197b" Nov 6 01:50:53.773: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-9b172b00-7e4e-4e6f-a810-4a95beb1197b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9b172b00-7e4e-4e6f-a810-4a95beb1197b" "/tmp/local-volume-test-9b172b00-7e4e-4e6f-a810-4a95beb1197b"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:53.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e53aa599-9ec2-4798-8e65-d98133192984" Nov 6 01:50:53.994: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e53aa599-9ec2-4798-8e65-d98133192984" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e53aa599-9ec2-4798-8e65-d98133192984" "/tmp/local-volume-test-e53aa599-9ec2-4798-8e65-d98133192984"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:53.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-1e701f97-17bd-4eb8-8021-4b612c1c1345" Nov 6 01:50:54.122: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1e701f97-17bd-4eb8-8021-4b612c1c1345" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1e701f97-17bd-4eb8-8021-4b612c1c1345" "/tmp/local-volume-test-1e701f97-17bd-4eb8-8021-4b612c1c1345"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:50:54.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully STEP: Delete "local-pv4jq2x" and create a new PV for same local volume storage STEP: Delete "local-pvpfbq9" and create a new PV for same local volume storage Nov 6 01:51:04.445: INFO: Deleting pod pod-2bd2f121-2c93-4719-a583-31d3713d204e Nov 6 01:51:04.453: INFO: Deleting PersistentVolumeClaim "pvc-mzw9x" Nov 6 01:51:04.457: INFO: Deleting PersistentVolumeClaim "pvc-wl2b8" Nov 6 01:51:04.461: INFO: Deleting PersistentVolumeClaim "pvc-5qzcg" Nov 6 01:51:04.464: INFO: 1/28 pods finished STEP: Delete "local-pvfrjvt" and create a new PV for same local volume storage STEP: Delete "local-pvc5hwk" and create a new PV for same local volume storage STEP: Delete "local-pvkqwcr" and create a new PV for same local volume storage STEP: Delete "local-pvlps4c" and create a new PV for same local volume storage Nov 6 01:51:08.442: INFO: Deleting pod pod-a36f1aba-05b0-4ab2-8eaa-962951dec744 Nov 6 01:51:08.449: INFO: Deleting PersistentVolumeClaim "pvc-zcn4r" Nov 6 01:51:08.453: INFO: Deleting PersistentVolumeClaim "pvc-6w96c" Nov 6 01:51:08.457: INFO: Deleting PersistentVolumeClaim "pvc-hmt6x" Nov 6 01:51:08.461: INFO: 2/28 pods finished STEP: Delete "local-pvwmgzd" and create a new PV for same local volume storage STEP: Delete "local-pv777pn" and create a new PV for same local volume storage STEP: Delete "local-pvq4fkp" and create a 
new PV for same local volume storage Nov 6 01:51:09.442: INFO: Deleting pod pod-d4a36e21-3599-41d2-aafd-c03bd76f32f7 Nov 6 01:51:09.449: INFO: Deleting PersistentVolumeClaim "pvc-vfdg9" Nov 6 01:51:09.453: INFO: Deleting PersistentVolumeClaim "pvc-dtsp5" Nov 6 01:51:09.456: INFO: Deleting PersistentVolumeClaim "pvc-t2z4l" Nov 6 01:51:09.459: INFO: 3/28 pods finished STEP: Delete "local-pvrmstv" and create a new PV for same local volume storage STEP: Delete "local-pvzbfpk" and create a new PV for same local volume storage STEP: Delete "local-pvc28c8" and create a new PV for same local volume storage Nov 6 01:51:12.443: INFO: Deleting pod pod-b558623b-8e47-4b1a-8198-f68f3982e5a4 Nov 6 01:51:12.451: INFO: Deleting PersistentVolumeClaim "pvc-mjt7h" Nov 6 01:51:12.455: INFO: Deleting PersistentVolumeClaim "pvc-snffs" Nov 6 01:51:12.458: INFO: Deleting PersistentVolumeClaim "pvc-cw8rh" Nov 6 01:51:12.461: INFO: 4/28 pods finished STEP: Delete "local-pv4zkmj" and create a new PV for same local volume storage STEP: Delete "local-pvdjqzd" and create a new PV for same local volume storage STEP: Delete "local-pv2pzfn" and create a new PV for same local volume storage Nov 6 01:51:14.444: INFO: Deleting pod pod-784b84f4-1a3a-4301-b46a-74819afcd2b5 Nov 6 01:51:14.449: INFO: Deleting PersistentVolumeClaim "pvc-6khwd" Nov 6 01:51:14.453: INFO: Deleting PersistentVolumeClaim "pvc-j7pcb" Nov 6 01:51:14.456: INFO: Deleting PersistentVolumeClaim "pvc-r7pjj" Nov 6 01:51:14.460: INFO: 5/28 pods finished STEP: Delete "local-pvrjm7z" and create a new PV for same local volume storage STEP: Delete "local-pvs4c9r" and create a new PV for same local volume storage STEP: Delete "local-pvwmrnn" and create a new PV for same local volume storage Nov 6 01:51:16.443: INFO: Deleting pod pod-b094121e-dfc2-42d4-b956-c0e0eac7f700 Nov 6 01:51:16.449: INFO: Deleting PersistentVolumeClaim "pvc-fzhwm" Nov 6 01:51:16.453: INFO: Deleting PersistentVolumeClaim "pvc-pdp99" Nov 6 01:51:16.457: INFO: Deleting PersistentVolumeClaim "pvc-rm567" Nov 6 01:51:16.460: INFO: 6/28 pods finished STEP: Delete "local-pv6mdj6" and create a new PV for same local volume storage STEP: Delete "local-pv6jrlt" and create a new PV for same local volume storage STEP: Delete "local-pv6jrlt" and create a new PV for same local volume storage STEP: Delete "local-pvhd8mz" and create a new PV for same local volume storage STEP: Delete "local-pvbwm8v" and create a new PV for same local volume storage Nov 6 01:51:23.443: INFO: Deleting pod pod-801957d6-b666-4d8e-a1e6-b110700bb007 Nov 6 01:51:23.448: INFO: Deleting PersistentVolumeClaim "pvc-txs7b" Nov 6 01:51:23.451: INFO: Deleting PersistentVolumeClaim "pvc-hwrch" Nov 6 01:51:23.455: INFO: Deleting PersistentVolumeClaim "pvc-867xw" Nov 6 01:51:23.458: INFO: 7/28 pods finished STEP: Delete "local-pvj2vqs" and create a new PV for same local volume storage STEP: Delete "local-pvmwbbk" and create a new PV for same local volume storage STEP: Delete "local-pvsmld4" and create a new PV for same local volume storage Nov 6 01:51:25.444: INFO: Deleting pod pod-cf199734-62ed-432e-b373-9d1caca66e39 Nov 6 01:51:25.451: INFO: Deleting PersistentVolumeClaim "pvc-6c4bx" Nov 6 01:51:25.454: INFO: Deleting PersistentVolumeClaim "pvc-rlw69" Nov 6 01:51:25.458: INFO: Deleting PersistentVolumeClaim "pvc-rlbdr" Nov 6 01:51:25.462: INFO: 8/28 pods finished STEP: Delete "local-pv7t5tv" and create a new PV for same local volume storage STEP: Delete "local-pvqwcz9" and create a new PV for same local volume storage STEP: Delete 
"local-pvstr4w" and create a new PV for same local volume storage Nov 6 01:51:26.443: INFO: Deleting pod pod-b3bad232-a5b4-4a19-a224-7ca20955573b Nov 6 01:51:26.449: INFO: Deleting PersistentVolumeClaim "pvc-scq22" Nov 6 01:51:26.455: INFO: Deleting PersistentVolumeClaim "pvc-4rng7" Nov 6 01:51:26.458: INFO: Deleting PersistentVolumeClaim "pvc-2b7tl" Nov 6 01:51:26.462: INFO: 9/28 pods finished STEP: Delete "local-pvptw4v" and create a new PV for same local volume storage STEP: Delete "local-pvzc44v" and create a new PV for same local volume storage STEP: Delete "local-pv66kmq" and create a new PV for same local volume storage Nov 6 01:51:27.443: INFO: Deleting pod pod-031290bb-14fa-4a1a-bf7b-5d27ce85ad72 Nov 6 01:51:27.448: INFO: Deleting PersistentVolumeClaim "pvc-4g9wf" Nov 6 01:51:27.452: INFO: Deleting PersistentVolumeClaim "pvc-8csvl" Nov 6 01:51:27.455: INFO: Deleting PersistentVolumeClaim "pvc-g6phq" Nov 6 01:51:27.459: INFO: 10/28 pods finished STEP: Delete "local-pvdmxcj" and create a new PV for same local volume storage STEP: Delete "local-pv4cvj8" and create a new PV for same local volume storage STEP: Delete "local-pv5pfzp" and create a new PV for same local volume storage STEP: Delete "pvc-bb1ce683-70dd-4b99-a83a-94d10b3d53db" and create a new PV for same local volume storage Nov 6 01:51:30.442: INFO: Deleting pod pod-29a91e40-1fb3-43f3-89e5-fdfe2f426cbe Nov 6 01:51:30.448: INFO: Deleting PersistentVolumeClaim "pvc-btpvd" Nov 6 01:51:30.451: INFO: Deleting PersistentVolumeClaim "pvc-kjhr2" Nov 6 01:51:30.455: INFO: Deleting PersistentVolumeClaim "pvc-659qm" Nov 6 01:51:30.459: INFO: 11/28 pods finished STEP: Delete "local-pv97hdd" and create a new PV for same local volume storage STEP: Delete "local-pvtgkn2" and create a new PV for same local volume storage STEP: Delete "local-pvzx7wf" and create a new PV for same local volume storage STEP: Delete "pvc-bb1ce683-70dd-4b99-a83a-94d10b3d53db" and create a new PV for same local volume storage STEP: Delete "pvc-bb1ce683-70dd-4b99-a83a-94d10b3d53db" and create a new PV for same local volume storage STEP: Delete "pvc-91a3aaf9-d844-4f1d-9880-c620d711dc62" and create a new PV for same local volume storage STEP: Delete "pvc-91a3aaf9-d844-4f1d-9880-c620d711dc62" and create a new PV for same local volume storage STEP: Delete "pvc-91a3aaf9-d844-4f1d-9880-c620d711dc62" and create a new PV for same local volume storage Nov 6 01:51:37.444: INFO: Deleting pod pod-95f658c8-ccfe-4de3-a2ad-377948eff4ea Nov 6 01:51:37.452: INFO: Deleting PersistentVolumeClaim "pvc-fvr2n" Nov 6 01:51:37.456: INFO: Deleting PersistentVolumeClaim "pvc-kg7zd" Nov 6 01:51:37.460: INFO: Deleting PersistentVolumeClaim "pvc-nl2kl" Nov 6 01:51:37.463: INFO: 12/28 pods finished Nov 6 01:51:37.463: INFO: Deleting pod pod-f3bf7ca0-ab01-44cd-8ba6-3739f53df5a1 Nov 6 01:51:37.470: INFO: Deleting PersistentVolumeClaim "pvc-q2rtb" STEP: Delete "local-pv69x6x" and create a new PV for same local volume storage Nov 6 01:51:37.474: INFO: Deleting PersistentVolumeClaim "pvc-w5r7q" Nov 6 01:51:37.477: INFO: Deleting PersistentVolumeClaim "pvc-2vtdv" Nov 6 01:51:37.480: INFO: 13/28 pods finished STEP: Delete "local-pvvf4jx" and create a new PV for same local volume storage STEP: Delete "local-pv9bsgk" and create a new PV for same local volume storage STEP: Delete "local-pv6t8b6" and create a new PV for same local volume storage STEP: Delete "local-pvbsr6z" and create a new PV for same local volume storage STEP: Delete "local-pvlvk8z" and create a new PV for same local volume storage Nov 
6 01:51:39.442: INFO: Deleting pod pod-6d2132c7-131d-420a-a584-ab2d362bcbce Nov 6 01:51:39.448: INFO: Deleting PersistentVolumeClaim "pvc-6fnnl" Nov 6 01:51:39.452: INFO: Deleting PersistentVolumeClaim "pvc-vtwzh" Nov 6 01:51:39.455: INFO: Deleting PersistentVolumeClaim "pvc-wb82d" Nov 6 01:51:39.458: INFO: 14/28 pods finished STEP: Delete "local-pvrhpvk" and create a new PV for same local volume storage STEP: Delete "local-pvf9rlc" and create a new PV for same local volume storage STEP: Delete "local-pv2g9b4" and create a new PV for same local volume storage Nov 6 01:51:43.443: INFO: Deleting pod pod-81333dc9-7093-495d-b548-afe8132e1ab4 Nov 6 01:51:43.448: INFO: Deleting PersistentVolumeClaim "pvc-rglrc" Nov 6 01:51:43.452: INFO: Deleting PersistentVolumeClaim "pvc-vkbt7" Nov 6 01:51:43.455: INFO: Deleting PersistentVolumeClaim "pvc-6mkcx" Nov 6 01:51:43.458: INFO: 15/28 pods finished STEP: Delete "local-pv4jsxl" and create a new PV for same local volume storage STEP: Delete "local-pv4jsxl" and create a new PV for same local volume storage STEP: Delete "local-pv5pc7m" and create a new PV for same local volume storage STEP: Delete "local-pvpkl6n" and create a new PV for same local volume storage Nov 6 01:51:46.442: INFO: Deleting pod pod-ee4840d1-60bb-41d3-8b90-d32ab0c3ca79 Nov 6 01:51:46.448: INFO: Deleting PersistentVolumeClaim "pvc-dp226" Nov 6 01:51:46.451: INFO: Deleting PersistentVolumeClaim "pvc-h6mxf" Nov 6 01:51:46.455: INFO: Deleting PersistentVolumeClaim "pvc-jgc8s" Nov 6 01:51:46.460: INFO: 16/28 pods finished STEP: Delete "local-pvb8f2x" and create a new PV for same local volume storage STEP: Delete "local-pvv22sj" and create a new PV for same local volume storage STEP: Delete "local-pvnk6wl" and create a new PV for same local volume storage Nov 6 01:51:50.442: INFO: Deleting pod pod-a9130a8b-0288-434d-a269-3b03635b81bc Nov 6 01:51:50.448: INFO: Deleting PersistentVolumeClaim "pvc-89wbg" Nov 6 01:51:50.452: INFO: Deleting PersistentVolumeClaim "pvc-54wkb" Nov 6 01:51:50.456: INFO: Deleting PersistentVolumeClaim "pvc-fvd42" Nov 6 01:51:50.459: INFO: 17/28 pods finished STEP: Delete "local-pvchp6m" and create a new PV for same local volume storage STEP: Delete "local-pvzfcjv" and create a new PV for same local volume storage STEP: Delete "local-pvwgrsj" and create a new PV for same local volume storage Nov 6 01:51:55.442: INFO: Deleting pod pod-bc9b2344-09a4-4e71-8c52-26d48a902ecd Nov 6 01:51:55.448: INFO: Deleting PersistentVolumeClaim "pvc-jzfx8" Nov 6 01:51:55.452: INFO: Deleting PersistentVolumeClaim "pvc-tk2mq" Nov 6 01:51:55.456: INFO: Deleting PersistentVolumeClaim "pvc-mbvsb" Nov 6 01:51:55.459: INFO: 18/28 pods finished STEP: Delete "local-pvcgb28" and create a new PV for same local volume storage STEP: Delete "local-pvlcc4g" and create a new PV for same local volume storage STEP: Delete "local-pv5gjmd" and create a new PV for same local volume storage Nov 6 01:51:57.444: INFO: Deleting pod pod-48aa2a46-8302-4cfd-8ad5-0c905639fe6c Nov 6 01:51:57.450: INFO: Deleting PersistentVolumeClaim "pvc-t8dg5" Nov 6 01:51:57.455: INFO: Deleting PersistentVolumeClaim "pvc-8tx79" Nov 6 01:51:57.458: INFO: Deleting PersistentVolumeClaim "pvc-zbdxc" Nov 6 01:51:57.462: INFO: 19/28 pods finished Nov 6 01:51:57.462: INFO: Deleting pod pod-7ec14a0f-e221-4cd8-8a7a-d1c6f688578f Nov 6 01:51:57.467: INFO: Deleting PersistentVolumeClaim "pvc-h796z" Nov 6 01:51:57.470: INFO: Deleting PersistentVolumeClaim "pvc-tf7sx" Nov 6 01:51:57.474: INFO: Deleting PersistentVolumeClaim "pvc-7gm59" Nov 6 
01:51:57.477: INFO: 20/28 pods finished STEP: Delete "local-pv5tpr7" and create a new PV for same local volume storage STEP: Delete "local-pv5tpr7" and create a new PV for same local volume storage STEP: Delete "local-pvr4q5q" and create a new PV for same local volume storage STEP: Delete "local-pvsfxnd" and create a new PV for same local volume storage STEP: Delete "local-pvh7ssq" and create a new PV for same local volume storage STEP: Delete "local-pv578bc" and create a new PV for same local volume storage STEP: Delete "local-pvjf9qb" and create a new PV for same local volume storage Nov 6 01:51:58.442: INFO: Deleting pod pod-e5a91891-c59c-4c68-8ab1-d341905c51ab Nov 6 01:51:58.447: INFO: Deleting PersistentVolumeClaim "pvc-vsrfr" Nov 6 01:51:58.452: INFO: Deleting PersistentVolumeClaim "pvc-rwvzq" Nov 6 01:51:58.455: INFO: Deleting PersistentVolumeClaim "pvc-tw79b" Nov 6 01:51:58.459: INFO: 21/28 pods finished STEP: Delete "local-pvfnllb" and create a new PV for same local volume storage STEP: Delete "local-pvpblm5" and create a new PV for same local volume storage STEP: Delete "local-pvz8h9c" and create a new PV for same local volume storage Nov 6 01:52:04.446: INFO: Deleting pod pod-24db76eb-14c0-46cb-b603-3abca8cb106b Nov 6 01:52:04.454: INFO: Deleting PersistentVolumeClaim "pvc-tmq59" Nov 6 01:52:04.458: INFO: Deleting PersistentVolumeClaim "pvc-rhbtn" Nov 6 01:52:04.461: INFO: Deleting PersistentVolumeClaim "pvc-2rllw" Nov 6 01:52:04.465: INFO: 22/28 pods finished STEP: Delete "local-pvfc7hg" and create a new PV for same local volume storage STEP: Delete "local-pvhv8pd" and create a new PV for same local volume storage STEP: Delete "local-pvgxn7v" and create a new PV for same local volume storage Nov 6 01:52:05.443: INFO: Deleting pod pod-c944a5fa-1599-4b18-8c39-25c4bdf8ba52 Nov 6 01:52:05.449: INFO: Deleting PersistentVolumeClaim "pvc-cvdlx" Nov 6 01:52:05.453: INFO: Deleting PersistentVolumeClaim "pvc-nxfxf" Nov 6 01:52:05.457: INFO: Deleting PersistentVolumeClaim "pvc-6lxsv" Nov 6 01:52:05.461: INFO: 23/28 pods finished STEP: Delete "local-pvgd7nh" and create a new PV for same local volume storage STEP: Delete "local-pvb6d9w" and create a new PV for same local volume storage STEP: Delete "local-pv54ltt" and create a new PV for same local volume storage STEP: Delete "pvc-7a743f7f-2303-4f97-a68d-372ee462cec5" and create a new PV for same local volume storage STEP: Delete "pvc-7a743f7f-2303-4f97-a68d-372ee462cec5" and create a new PV for same local volume storage Nov 6 01:52:06.445: INFO: Deleting pod pod-9ab1cfd1-5282-43e4-8ab5-47cff1c0515d Nov 6 01:52:06.452: INFO: Deleting PersistentVolumeClaim "pvc-mjz9c" Nov 6 01:52:06.455: INFO: Deleting PersistentVolumeClaim "pvc-fc5dh" Nov 6 01:52:06.459: INFO: Deleting PersistentVolumeClaim "pvc-xfbgb" Nov 6 01:52:06.462: INFO: 24/28 pods finished STEP: Delete "local-pvkpgcs" and create a new PV for same local volume storage STEP: Delete "local-pvjfszm" and create a new PV for same local volume storage STEP: Delete "local-pvvtpwt" and create a new PV for same local volume storage Nov 6 01:52:09.444: INFO: Deleting pod pod-7baacfba-48c3-429c-bc53-86b00ac4fe75 Nov 6 01:52:09.451: INFO: Deleting PersistentVolumeClaim "pvc-sfkd5" Nov 6 01:52:09.456: INFO: Deleting PersistentVolumeClaim "pvc-bk4w4" Nov 6 01:52:09.459: INFO: Deleting PersistentVolumeClaim "pvc-q9psl" Nov 6 01:52:09.463: INFO: 25/28 pods finished Nov 6 01:52:09.463: INFO: Deleting pod pod-c7339af6-27da-4c7a-b061-339acf651854 Nov 6 01:52:09.472: INFO: Deleting PersistentVolumeClaim 
"pvc-lnjwd" STEP: Delete "local-pvf2knw" and create a new PV for same local volume storage Nov 6 01:52:09.475: INFO: Deleting PersistentVolumeClaim "pvc-5l4bp" Nov 6 01:52:09.479: INFO: Deleting PersistentVolumeClaim "pvc-pjgkx" STEP: Delete "local-pvg9vvc" and create a new PV for same local volume storage Nov 6 01:52:09.482: INFO: 26/28 pods finished STEP: Delete "local-pvnsdxs" and create a new PV for same local volume storage STEP: Delete "local-pvnmv6z" and create a new PV for same local volume storage STEP: Delete "local-pvxpzdx" and create a new PV for same local volume storage STEP: Delete "local-pv9vkzh" and create a new PV for same local volume storage Nov 6 01:52:12.441: INFO: Deleting pod pod-adf15ebd-4653-4307-bb31-b07e07f4828e Nov 6 01:52:12.464: INFO: Deleting PersistentVolumeClaim "pvc-w2jkq" Nov 6 01:52:12.468: INFO: Deleting PersistentVolumeClaim "pvc-bpzvt" Nov 6 01:52:12.472: INFO: Deleting PersistentVolumeClaim "pvc-lc7tr" Nov 6 01:52:12.475: INFO: 27/28 pods finished STEP: Delete "local-pvk4z4x" and create a new PV for same local volume storage STEP: Delete "local-pvmcrmx" and create a new PV for same local volume storage STEP: Delete "local-pv6gfzr" and create a new PV for same local volume storage Nov 6 01:52:15.442: INFO: Deleting pod pod-3a74a2fa-25d4-4b79-905b-5c0db86c9fd8 Nov 6 01:52:15.449: INFO: Deleting PersistentVolumeClaim "pvc-zrbdr" Nov 6 01:52:15.452: INFO: Deleting PersistentVolumeClaim "pvc-xwpjv" Nov 6 01:52:15.456: INFO: Deleting PersistentVolumeClaim "pvc-p7pg6" Nov 6 01:52:15.460: INFO: 28/28 pods finished [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:519 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "node1" STEP: Cleaning up PVC and PV Nov 6 01:52:15.460: INFO: pvc is nil Nov 6 01:52:15.460: INFO: Deleting PersistentVolume "local-pv6tck7" STEP: Cleaning up PVC and PV Nov 6 01:52:15.464: INFO: pvc is nil Nov 6 01:52:15.464: INFO: Deleting PersistentVolume "local-pv8xf2g" STEP: Cleaning up PVC and PV Nov 6 01:52:15.468: INFO: pvc is nil Nov 6 01:52:15.468: INFO: Deleting PersistentVolume "local-pvkwdsl" STEP: Cleaning up PVC and PV Nov 6 01:52:15.471: INFO: pvc is nil Nov 6 01:52:15.471: INFO: Deleting PersistentVolume "local-pv59pjt" STEP: Cleaning up PVC and PV Nov 6 01:52:15.475: INFO: pvc is nil Nov 6 01:52:15.475: INFO: Deleting PersistentVolume "local-pvbdt5w" STEP: Cleaning up PVC and PV Nov 6 01:52:15.478: INFO: pvc is nil Nov 6 01:52:15.478: INFO: Deleting PersistentVolume "local-pv86k7d" STEP: Cleaning up PVC and PV Nov 6 01:52:15.481: INFO: pvc is nil Nov 6 01:52:15.481: INFO: Deleting PersistentVolume "local-pvzpctg" STEP: Cleaning up PVC and PV Nov 6 01:52:15.485: INFO: pvc is nil Nov 6 01:52:15.485: INFO: Deleting PersistentVolume "local-pvqtq7b" STEP: Cleaning up PVC and PV Nov 6 01:52:15.489: INFO: pvc is nil Nov 6 01:52:15.489: INFO: Deleting PersistentVolume "local-pvt4mv9" STEP: Cleaning up PVC and PV Nov 6 01:52:15.492: INFO: pvc is nil Nov 6 01:52:15.492: INFO: Deleting PersistentVolume "local-pv4drqb" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4a69964e-def4-4789-af86-7e58e6720551" Nov 6 01:52:15.496: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4a69964e-def4-4789-af86-7e58e6720551"] Namespace:persistent-local-volumes-test-3104 
PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:15.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:15.594: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4a69964e-def4-4789-af86-7e58e6720551] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:15.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-40925952-70d0-4c6d-872b-f668668cc210" Nov 6 01:52:15.693: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-40925952-70d0-4c6d-872b-f668668cc210"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:15.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:15.805: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-40925952-70d0-4c6d-872b-f668668cc210] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:15.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-95bf36f7-57d2-45cd-8ac2-8b98b7124ccc" Nov 6 01:52:15.890: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-95bf36f7-57d2-45cd-8ac2-8b98b7124ccc"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:15.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:15.984: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-95bf36f7-57d2-45cd-8ac2-8b98b7124ccc] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:15.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-8fe17d23-0266-4158-a16b-f2fba441bb39" Nov 6 01:52:16.075: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8fe17d23-0266-4158-a16b-f2fba441bb39"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:16.166: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8fe17d23-0266-4158-a16b-f2fba441bb39] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 
01:52:16.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-05fedaf4-410e-4a3d-b26e-3a2dde6c419c" Nov 6 01:52:16.252: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-05fedaf4-410e-4a3d-b26e-3a2dde6c419c"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:16.355: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-05fedaf4-410e-4a3d-b26e-3a2dde6c419c] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-55c2be5e-bbdf-44bd-aff4-d6bacd4a36cc" Nov 6 01:52:16.442: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-55c2be5e-bbdf-44bd-aff4-d6bacd4a36cc"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:16.558: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-55c2be5e-bbdf-44bd-aff4-d6bacd4a36cc] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-9b8b1864-8ad1-41c1-985d-c7873e952c43" Nov 6 01:52:16.682: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9b8b1864-8ad1-41c1-985d-c7873e952c43"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:16.776: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9b8b1864-8ad1-41c1-985d-c7873e952c43] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-99b4faf0-04f7-43f0-8ede-41681965cfc8" Nov 6 01:52:16.866: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-99b4faf0-04f7-43f0-8ede-41681965cfc8"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.866: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Removing the test directory Nov 6 01:52:16.966: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-99b4faf0-04f7-43f0-8ede-41681965cfc8] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:16.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-920a3766-81f8-446c-ad90-4596982b21b8" Nov 6 01:52:17.068: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-920a3766-81f8-446c-ad90-4596982b21b8"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:17.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:17.159: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-920a3766-81f8-446c-ad90-4596982b21b8] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:17.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-2aa674d7-912b-447a-a0f7-50c0edbaa690" Nov 6 01:52:17.274: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2aa674d7-912b-447a-a0f7-50c0edbaa690"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:17.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:17.385: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2aa674d7-912b-447a-a0f7-50c0edbaa690] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node1-x46s7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:17.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "node2" STEP: Cleaning up PVC and PV Nov 6 01:52:17.474: INFO: pvc is nil Nov 6 01:52:17.474: INFO: Deleting PersistentVolume "local-pvwcqgz" STEP: Cleaning up PVC and PV Nov 6 01:52:17.478: INFO: pvc is nil Nov 6 01:52:17.478: INFO: Deleting PersistentVolume "local-pvlrrnm" STEP: Cleaning up PVC and PV Nov 6 01:52:17.482: INFO: pvc is nil Nov 6 01:52:17.482: INFO: Deleting PersistentVolume "local-pvljx9l" STEP: Cleaning up PVC and PV Nov 6 01:52:17.486: INFO: pvc is nil Nov 6 01:52:17.486: INFO: Deleting PersistentVolume "local-pvmlbw7" STEP: Cleaning up PVC and PV Nov 6 01:52:17.489: INFO: pvc is nil Nov 6 01:52:17.489: INFO: Deleting PersistentVolume "local-pvrtvhl" STEP: Cleaning up PVC and PV Nov 6 01:52:17.493: INFO: pvc is nil Nov 6 01:52:17.493: INFO: Deleting PersistentVolume "local-pvjzkkv" STEP: Cleaning up PVC and PV Nov 6 01:52:17.496: INFO: pvc is nil Nov 6 01:52:17.496: INFO: Deleting PersistentVolume "local-pvhzxc5" STEP: Cleaning up PVC and PV Nov 6 01:52:17.500: INFO: pvc is nil Nov 6 01:52:17.500: INFO: Deleting PersistentVolume "local-pvdnh6w" STEP: Cleaning up 
PVC and PV Nov 6 01:52:17.504: INFO: pvc is nil Nov 6 01:52:17.504: INFO: Deleting PersistentVolume "local-pvg6wd9" STEP: Cleaning up PVC and PV Nov 6 01:52:17.508: INFO: pvc is nil Nov 6 01:52:17.508: INFO: Deleting PersistentVolume "local-pvwt4tg" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-d5b09707-b361-4cfb-906a-55c25cdf4f33" Nov 6 01:52:17.511: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d5b09707-b361-4cfb-906a-55c25cdf4f33"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:17.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:17.636: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d5b09707-b361-4cfb-906a-55c25cdf4f33] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:17.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ebfa976f-8a4d-4fc3-91ba-fe941220b8d6" Nov 6 01:52:18.007: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ebfa976f-8a4d-4fc3-91ba-fe941220b8d6"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:18.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:18.107: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ebfa976f-8a4d-4fc3-91ba-fe941220b8d6] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:18.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8e97e5d6-c8b8-4eae-8e9e-93a8072ba105" Nov 6 01:52:18.237: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8e97e5d6-c8b8-4eae-8e9e-93a8072ba105"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:18.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:18.373: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8e97e5d6-c8b8-4eae-8e9e-93a8072ba105] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:18.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-eb792536-b789-4a71-aa80-cbe3a5713696" Nov 6 01:52:18.500: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-eb792536-b789-4a71-aa80-cbe3a5713696"] Namespace:persistent-local-volumes-test-3104 
PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:18.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:18.614: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-eb792536-b789-4a71-aa80-cbe3a5713696] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:18.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-336d9ac9-5bf0-4bba-9e2a-022e73e8d6b0" Nov 6 01:52:18.715: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-336d9ac9-5bf0-4bba-9e2a-022e73e8d6b0"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:18.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:18.837: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-336d9ac9-5bf0-4bba-9e2a-022e73e8d6b0] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:18.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-4d14979f-57ac-4304-a345-ec2147b3d4d1" Nov 6 01:52:18.927: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4d14979f-57ac-4304-a345-ec2147b3d4d1"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:18.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:19.055: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4d14979f-57ac-4304-a345-ec2147b3d4d1] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:19.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-6f2f4f35-c7ff-4af4-bf9d-d2f0ed3db422" Nov 6 01:52:19.202: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6f2f4f35-c7ff-4af4-bf9d-d2f0ed3db422"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:19.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:19.321: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6f2f4f35-c7ff-4af4-bf9d-d2f0ed3db422] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 
01:52:19.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9b172b00-7e4e-4e6f-a810-4a95beb1197b" Nov 6 01:52:19.477: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9b172b00-7e4e-4e6f-a810-4a95beb1197b"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:19.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:19.582: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9b172b00-7e4e-4e6f-a810-4a95beb1197b] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:19.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e53aa599-9ec2-4798-8e65-d98133192984" Nov 6 01:52:19.701: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e53aa599-9ec2-4798-8e65-d98133192984"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:19.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:19.844: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e53aa599-9ec2-4798-8e65-d98133192984] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:19.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-1e701f97-17bd-4eb8-8021-4b612c1c1345" Nov 6 01:52:19.922: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1e701f97-17bd-4eb8-8021-4b612c1c1345"] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:19.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:52:20.026: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1e701f97-17bd-4eb8-8021-4b612c1c1345] Namespace:persistent-local-volumes-test-3104 PodName:hostexec-node2-bgcbk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:20.026: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:20.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3104" for this suite. 
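
For reference, the per-volume work the stress test above drives through the hostexec pods (via nsenter into each node's mount namespace) reduces to the following shell sequence. This is only a sketch condensed from the commands visible in the log, not the suite's implementation; DIR stands in for the generated /tmp/local-volume-test-<uuid> paths.

    DIR="/tmp/local-volume-test-example"   # placeholder for the generated path
    # setup: create the directory and back it with a small tmpfs (size=10m)
    mkdir -p "$DIR" && mount -t tmpfs -o size=10m tmpfs-"$DIR" "$DIR"
    # ... local PVs are created for this mount point and pods consume them ...
    # teardown: unmount the tmpfs and remove the directory
    umount "$DIR"
    rm -r "$DIR"

The recycle goroutine logged above only deletes and recreates PV objects for these same mount points; the directories themselves are reused until this final unmount and removal.
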
• [SLOW TEST:106.602 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:531 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":-1,"completed":1,"skipped":86,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:17.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48 STEP: Creating a pod to test hostPath mode Nov 6 01:52:17.234: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-470" to be "Succeeded or Failed" Nov 6 01:52:17.241: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.577744ms Nov 6 01:52:19.244: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010605043s Nov 6 01:52:21.249: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014949681s STEP: Saw pod success Nov 6 01:52:21.249: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Nov 6 01:52:21.251: INFO: Trying to get logs from node node1 pod pod-host-path-test container test-container-1: STEP: delete the pod Nov 6 01:52:21.270: INFO: Waiting for pod pod-host-path-test to disappear Nov 6 01:52:21.273: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:21.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-470" for this suite. 
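
The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" entries above are the framework polling the pod phase until it reaches a terminal state. A rough kubectl rendering of that wait, purely illustrative (the suite itself polls through the Go client; the names below are taken from this run), would be:

    ns=hostpath-470
    pod=pod-host-path-test
    # poll the pod phase until it reaches a terminal state
    while true; do
      phase=$(kubectl get pod "$pod" -n "$ns" -o jsonpath='{.status.phase}')
      echo "phase=$phase"
      if [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ]; then break; fi
      sleep 2
    done
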
• ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":90,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:13.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:52:15.123: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-3e9f61bd-0d20-4591-8fca-15ce1bacbd69-backend && ln -s /tmp/local-volume-test-3e9f61bd-0d20-4591-8fca-15ce1bacbd69-backend /tmp/local-volume-test-3e9f61bd-0d20-4591-8fca-15ce1bacbd69] Namespace:persistent-local-volumes-test-529 PodName:hostexec-node1-pbb5x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:15.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:52:15.213: INFO: Creating a PV followed by a PVC Nov 6 01:52:15.219: INFO: Waiting for PV local-pv92rkr to bind to PVC pvc-ml5zt Nov 6 01:52:15.219: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ml5zt] to have phase Bound Nov 6 01:52:15.222: INFO: PersistentVolumeClaim pvc-ml5zt found but phase is Pending instead of Bound. Nov 6 01:52:17.225: INFO: PersistentVolumeClaim pvc-ml5zt found but phase is Pending instead of Bound. Nov 6 01:52:19.228: INFO: PersistentVolumeClaim pvc-ml5zt found but phase is Pending instead of Bound. Nov 6 01:52:21.232: INFO: PersistentVolumeClaim pvc-ml5zt found but phase is Pending instead of Bound. 
Nov 6 01:52:23.235: INFO: PersistentVolumeClaim pvc-ml5zt found and phase=Bound (8.015840854s) Nov 6 01:52:23.235: INFO: Waiting up to 3m0s for PersistentVolume local-pv92rkr to have phase Bound Nov 6 01:52:23.237: INFO: PersistentVolume local-pv92rkr found and phase=Bound (1.897195ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:52:27.283: INFO: pod "pod-08ea5f81-7ed5-429a-9f3b-7625ce293736" created on Node "node1" STEP: Writing in pod1 Nov 6 01:52:27.283: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-529 PodName:pod-08ea5f81-7ed5-429a-9f3b-7625ce293736 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:27.283: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:27.655: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 6 01:52:27.655: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-529 PodName:pod-08ea5f81-7ed5-429a-9f3b-7625ce293736 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:27.655: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:27.873: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 6 01:52:27.874: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3e9f61bd-0d20-4591-8fca-15ce1bacbd69 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-529 PodName:pod-08ea5f81-7ed5-429a-9f3b-7625ce293736 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:27.874: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:27.951: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-3e9f61bd-0d20-4591-8fca-15ce1bacbd69 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-08ea5f81-7ed5-429a-9f3b-7625ce293736 in namespace persistent-local-volumes-test-529 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:52:27.956: INFO: Deleting PersistentVolumeClaim "pvc-ml5zt" Nov 6 01:52:27.960: INFO: Deleting PersistentVolume "local-pv92rkr" STEP: Removing the test directory Nov 6 01:52:27.965: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3e9f61bd-0d20-4591-8fca-15ce1bacbd69 && rm -r /tmp/local-volume-test-3e9f61bd-0d20-4591-8fca-15ce1bacbd69-backend] Namespace:persistent-local-volumes-test-529 PodName:hostexec-node1-pbb5x ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} 
Nov 6 01:52:27.965: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:28.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-529" for this suite. • [SLOW TEST:15.066 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":110,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:28.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 6 01:52:28.197: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:28.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3188" for this suite. 
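
The dir-link variant above prepares its local volume by creating a backing directory and pointing a symlink at it, then removes both in AfterEach. Condensed from the exec commands in the log (a sketch; DIR is a placeholder for the generated path):

    DIR="/tmp/local-volume-test-example"    # placeholder
    # setup: backing directory plus the symlink the local PV points at
    mkdir "$DIR-backend" && ln -s "$DIR-backend" "$DIR"
    # teardown after the PVC and PV are deleted
    rm -r "$DIR" && rm -r "$DIR-backend"

The data-path check itself is the pair of in-pod execs shown above: write with "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file" and read it back with "cat /mnt/volume1/test-file".
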
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create total pv count metrics for with plugin and volume mode labels after creating pv /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:513 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:21.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 STEP: Creating configMap with name configmap-test-volume-map-5bc86115-2cb8-4709-8cce-8020177bd3e7 STEP: Creating a pod to test consume configMaps Nov 6 01:52:21.333: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4" in namespace "configmap-9816" to be "Succeeded or Failed" Nov 6 01:52:21.336: INFO: Pod "pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.756724ms Nov 6 01:52:23.339: INFO: Pod "pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00681134s Nov 6 01:52:25.344: INFO: Pod "pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011303841s Nov 6 01:52:27.347: INFO: Pod "pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014518638s Nov 6 01:52:29.351: INFO: Pod "pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018827635s Nov 6 01:52:31.356: INFO: Pod "pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023309757s Nov 6 01:52:33.358: INFO: Pod "pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.025897265s STEP: Saw pod success Nov 6 01:52:33.359: INFO: Pod "pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4" satisfied condition "Succeeded or Failed" Nov 6 01:52:33.361: INFO: Trying to get logs from node node1 pod pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4 container agnhost-container: STEP: delete the pod Nov 6 01:52:33.406: INFO: Waiting for pod pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4 to disappear Nov 6 01:52:33.408: INFO: Pod pod-configmaps-4ea31bed-54c5-4266-843c-190e2d7982f4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:33.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9816" for this suite. • [SLOW TEST:12.123 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:28.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:52:30.269: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-31ebd544-7ee7-480b-a2ef-6f3224a03253 && mount --bind /tmp/local-volume-test-31ebd544-7ee7-480b-a2ef-6f3224a03253 /tmp/local-volume-test-31ebd544-7ee7-480b-a2ef-6f3224a03253] Namespace:persistent-local-volumes-test-2605 PodName:hostexec-node2-t42ht ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:30.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:52:30.380: INFO: Creating a PV followed by a PVC Nov 6 01:52:30.387: INFO: Waiting for PV local-pv96ddh to bind to PVC pvc-4r695 Nov 6 01:52:30.387: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4r695] to have phase Bound Nov 6 01:52:30.389: INFO: PersistentVolumeClaim pvc-4r695 found but phase is Pending instead of Bound. Nov 6 01:52:32.394: INFO: PersistentVolumeClaim pvc-4r695 found but phase is Pending instead of Bound. Nov 6 01:52:34.399: INFO: PersistentVolumeClaim pvc-4r695 found but phase is Pending instead of Bound. 
Nov 6 01:52:36.403: INFO: PersistentVolumeClaim pvc-4r695 found but phase is Pending instead of Bound. Nov 6 01:52:38.406: INFO: PersistentVolumeClaim pvc-4r695 found and phase=Bound (8.018388623s) Nov 6 01:52:38.406: INFO: Waiting up to 3m0s for PersistentVolume local-pv96ddh to have phase Bound Nov 6 01:52:38.409: INFO: PersistentVolume local-pv96ddh found and phase=Bound (2.964029ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 6 01:52:38.414: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:52:38.415: INFO: Deleting PersistentVolumeClaim "pvc-4r695" Nov 6 01:52:38.420: INFO: Deleting PersistentVolume "local-pv96ddh" STEP: Removing the test directory Nov 6 01:52:38.424: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-31ebd544-7ee7-480b-a2ef-6f3224a03253 && rm -r /tmp/local-volume-test-31ebd544-7ee7-480b-a2ef-6f3224a03253] Namespace:persistent-local-volumes-test-2605 PodName:hostexec-node2-t42ht ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:38.424: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:38.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2605" for this suite. 
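The dir-bindmounted variant above prepares its backing directory by bind-mounting a directory onto itself on the target node (making the path its own mount point), and tears it down with the matching umount. Reduced to a standalone sketch (the path is made up; in the test the commands run through nsenter inside a hostexec pod, as logged above):

# run as root on the node that will host the local volume
dir=/tmp/local-volume-test-example          # hypothetical path
mkdir "$dir"
mount --bind "$dir" "$dir"                  # bind the directory onto itself

# ... the directory is then used as the backing path of a local PersistentVolume ...

# teardown, mirroring the AfterEach commands in the log
umount "$dir"
rm -r "$dir"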
S [SKIPPING] [10.313 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:38.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 6 01:52:38.581: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:38.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9700" for this suite. 
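The recurring "Only supported for providers [gce gke aws] (not local)" skips come from the suite's provider gate: this run targets a local/bare-metal cluster, so every Volume metrics spec bails out in its BeforeEach. A hedged sketch of how such a run is typically launched (flag names per the standard e2e.test binary; the focus pattern and paths are illustrative):

# run only the sig-storage Volume metrics specs against a local cluster
./e2e.test \
  --kubeconfig="$HOME/.kube/config" \
  --provider=local \
  --ginkgo.focus='\[sig-storage\].*Volume metrics'
# with --provider=gce (plus the GCE project/zone settings) the same specs would run instead of skipping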
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:147 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:20.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-53a6b426-0886-4279-addb-4ba94ab56cb2" Nov 6 01:52:22.226: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-53a6b426-0886-4279-addb-4ba94ab56cb2 && dd if=/dev/zero of=/tmp/local-volume-test-53a6b426-0886-4279-addb-4ba94ab56cb2/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-53a6b426-0886-4279-addb-4ba94ab56cb2/file] Namespace:persistent-local-volumes-test-715 PodName:hostexec-node2-cb5jq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:22.226: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:22.421: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-53a6b426-0886-4279-addb-4ba94ab56cb2/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-715 PodName:hostexec-node2-cb5jq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:22.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:52:22.609: INFO: Creating a PV followed by a PVC Nov 6 01:52:22.618: INFO: Waiting for PV local-pv4vsqj to bind to PVC pvc-cgnkh Nov 6 01:52:22.618: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cgnkh] to have phase Bound Nov 6 01:52:22.620: INFO: PersistentVolumeClaim pvc-cgnkh found but phase is Pending instead of Bound. Nov 6 01:52:24.625: INFO: PersistentVolumeClaim pvc-cgnkh found but phase is Pending instead of Bound. Nov 6 01:52:26.627: INFO: PersistentVolumeClaim pvc-cgnkh found but phase is Pending instead of Bound. Nov 6 01:52:28.630: INFO: PersistentVolumeClaim pvc-cgnkh found but phase is Pending instead of Bound. 
Nov 6 01:52:30.634: INFO: PersistentVolumeClaim pvc-cgnkh found but phase is Pending instead of Bound. Nov 6 01:52:32.639: INFO: PersistentVolumeClaim pvc-cgnkh found but phase is Pending instead of Bound. Nov 6 01:52:34.643: INFO: PersistentVolumeClaim pvc-cgnkh found but phase is Pending instead of Bound. Nov 6 01:52:36.645: INFO: PersistentVolumeClaim pvc-cgnkh found but phase is Pending instead of Bound. Nov 6 01:52:38.648: INFO: PersistentVolumeClaim pvc-cgnkh found and phase=Bound (16.030472258s) Nov 6 01:52:38.648: INFO: Waiting up to 3m0s for PersistentVolume local-pv4vsqj to have phase Bound Nov 6 01:52:38.650: INFO: PersistentVolume local-pv4vsqj found and phase=Bound (1.850657ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 6 01:52:42.677: INFO: pod "pod-d5d2f842-63c5-4913-961d-99b584fa7395" created on Node "node2" STEP: Writing in pod1 Nov 6 01:52:42.677: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-715 PodName:pod-d5d2f842-63c5-4913-961d-99b584fa7395 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:42.677: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:42.876: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000128 seconds, 137.3KB/s", err: Nov 6 01:52:42.876: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-715 PodName:pod-d5d2f842-63c5-4913-961d-99b584fa7395 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:42.876: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:42.952: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 6 01:52:46.975: INFO: pod "pod-03ebe1a0-25ed-490c-854c-b33d80c8f5b9" created on Node "node2" Nov 6 01:52:46.975: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-715 PodName:pod-03ebe1a0-25ed-490c-854c-b33d80c8f5b9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:46.975: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:47.105: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod2 Nov 6 01:52:47.105: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && 
SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-715 PodName:pod-03ebe1a0-25ed-490c-854c-b33d80c8f5b9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:47.105: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:47.215: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000031 seconds, 346.5KB/s", err: STEP: Reading in pod1 Nov 6 01:52:47.215: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-715 PodName:pod-d5d2f842-63c5-4913-961d-99b584fa7395 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:52:47.215: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:47.407: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "/dev/loop0.ontent...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-d5d2f842-63c5-4913-961d-99b584fa7395 in namespace persistent-local-volumes-test-715 STEP: Deleting pod2 STEP: Deleting pod pod-03ebe1a0-25ed-490c-854c-b33d80c8f5b9 in namespace persistent-local-volumes-test-715 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:52:47.415: INFO: Deleting PersistentVolumeClaim "pvc-cgnkh" Nov 6 01:52:47.418: INFO: Deleting PersistentVolume "local-pv4vsqj" Nov 6 01:52:47.422: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-53a6b426-0886-4279-addb-4ba94ab56cb2/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-715 PodName:hostexec-node2-cb5jq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:47.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-53a6b426-0886-4279-addb-4ba94ab56cb2/file Nov 6 01:52:47.582: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-715 PodName:hostexec-node2-cb5jq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:47.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-53a6b426-0886-4279-addb-4ba94ab56cb2 Nov 6 01:52:47.745: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-53a6b426-0886-4279-addb-4ba94ab56cb2] Namespace:persistent-local-volumes-test-715 PodName:hostexec-node2-cb5jq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:47.745: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:47.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-715" for this suite. • [SLOW TEST:27.804 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":93,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:48.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-b88ca898-7d6a-4d06-aa6c-999bd8989820" Nov 6 01:52:50.170: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b88ca898-7d6a-4d06-aa6c-999bd8989820 && dd if=/dev/zero of=/tmp/local-volume-test-b88ca898-7d6a-4d06-aa6c-999bd8989820/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-b88ca898-7d6a-4d06-aa6c-999bd8989820/file] Namespace:persistent-local-volumes-test-7841 PodName:hostexec-node1-bxq2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:50.170: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:50.310: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b88ca898-7d6a-4d06-aa6c-999bd8989820/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7841 PodName:hostexec-node1-bxq2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:50.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:52:50.399: INFO: Creating a PV followed by a PVC Nov 6 01:52:50.405: INFO: Waiting for PV local-pv7x4jp to bind to PVC pvc-gnd68 Nov 6 01:52:50.406: INFO: Waiting up to 
timeout=3m0s for PersistentVolumeClaims [pvc-gnd68] to have phase Bound Nov 6 01:52:50.407: INFO: PersistentVolumeClaim pvc-gnd68 found but phase is Pending instead of Bound. Nov 6 01:52:52.411: INFO: PersistentVolumeClaim pvc-gnd68 found and phase=Bound (2.005162283s) Nov 6 01:52:52.411: INFO: Waiting up to 3m0s for PersistentVolume local-pv7x4jp to have phase Bound Nov 6 01:52:52.413: INFO: PersistentVolume local-pv7x4jp found and phase=Bound (1.939208ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Nov 6 01:52:52.416: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:52:52.418: INFO: Deleting PersistentVolumeClaim "pvc-gnd68" Nov 6 01:52:52.422: INFO: Deleting PersistentVolume "local-pv7x4jp" Nov 6 01:52:52.426: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b88ca898-7d6a-4d06-aa6c-999bd8989820/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7841 PodName:hostexec-node1-bxq2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:52.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-b88ca898-7d6a-4d06-aa6c-999bd8989820/file Nov 6 01:52:52.515: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-7841 PodName:hostexec-node1-bxq2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:52.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-b88ca898-7d6a-4d06-aa6c-999bd8989820 Nov 6 01:52:52.604: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b88ca898-7d6a-4d06-aa6c-999bd8989820] Namespace:persistent-local-volumes-test-7841 PodName:hostexec-node1-bxq2g ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:52.604: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:52.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7841" for this suite. 
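Both [Volume type: block] cases above fabricate their device the same way: a zero-filled file becomes the backing store of a loop device, the loop device backs a local PV that pods consume as a raw block device (note dd writing straight to /mnt/volume1 in the passed two-pods spec), and teardown detaches the device and removes the directory. Stripped of the nsenter/hostexec plumbing, the node-side steps are roughly (paths are made up):

# on the node, as root (the test wraps these in nsenter via a hostexec pod)
dir=/tmp/local-volume-test-example                       # hypothetical path
mkdir -p "$dir"
dd if=/dev/zero of="$dir/file" bs=4096 count=5120        # ~20 MiB backing file
losetup -f "$dir/file"                                   # attach to the first free loop device
loopdev=$(losetup | grep "$dir/file" | awk '{ print $1 }')   # recover /dev/loopN, same pipeline as the log

# what the two-pods spec does through the raw block volume:
echo test-file-content > /tmp/test-file
dd if=/tmp/test-file of="$loopdev" bs=512 count=100      # write the marker into the device
hexdump -n 100 -e '100 "%_p"' "$loopdev" | head -1       # read the first 100 bytes back as printable characters

# teardown, mirroring the AfterEach commands
losetup -d "$loopdev"
rm -r "$dir"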
S [SKIPPING] in Spec Setup (BeforeEach) [4.627 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 We don't set fsGroup on block device, skipped. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:52.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 6 01:52:54.855: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-45 PodName:hostexec-node2-p5hhs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:52:54.855: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:54.943: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 6 01:52:54.943: INFO: exec node2: stdout: "0\n" Nov 6 01:52:54.943: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 6 01:52:54.943: INFO: exec node2: exit code: 0 Nov 6 01:52:54.943: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:54.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-45" for this suite. 
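The gce-localssd-scsi-fs variant just above is gated on real local SSDs being mounted under the GCE-specific by-uuid path; on this cluster the probe finds nothing (ls complains on stderr while the pipeline still exits 0), so the spec is skipped with "Requires at least 1 scsi fs localSSD". The probe reduces to:

# count filesystem-formatted local SSDs that a GCE image mounts by UUID
ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
# prints 0 here because the directory does not exist on a non-GCE node;
# the spec only proceeds when the count is at least 1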
S [SKIPPING] in Spec Setup (BeforeEach) [2.147 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:29.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-9740 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 6 01:51:29.286: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9740-6674/csi-attacher Nov 6 01:51:29.289: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9740 Nov 6 01:51:29.289: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9740 Nov 6 01:51:29.291: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9740 Nov 6 01:51:29.295: INFO: creating *v1.Role: csi-mock-volumes-9740-6674/external-attacher-cfg-csi-mock-volumes-9740 Nov 6 01:51:29.298: INFO: creating *v1.RoleBinding: csi-mock-volumes-9740-6674/csi-attacher-role-cfg Nov 6 01:51:29.300: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9740-6674/csi-provisioner Nov 6 01:51:29.318: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9740 Nov 6 01:51:29.318: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9740 Nov 6 01:51:29.322: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9740 Nov 6 01:51:29.325: INFO: creating *v1.Role: csi-mock-volumes-9740-6674/external-provisioner-cfg-csi-mock-volumes-9740 Nov 6 01:51:29.328: INFO: creating *v1.RoleBinding: csi-mock-volumes-9740-6674/csi-provisioner-role-cfg Nov 6 01:51:29.331: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9740-6674/csi-resizer Nov 6 01:51:29.333: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9740 Nov 6 01:51:29.333: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9740 Nov 6 01:51:29.336: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9740 Nov 6 01:51:29.339: INFO: creating *v1.Role: csi-mock-volumes-9740-6674/external-resizer-cfg-csi-mock-volumes-9740 Nov 6 01:51:29.341: INFO: creating *v1.RoleBinding: csi-mock-volumes-9740-6674/csi-resizer-role-cfg Nov 6 01:51:29.344: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-9740-6674/csi-snapshotter Nov 6 01:51:29.346: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9740 Nov 6 01:51:29.346: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9740 Nov 6 01:51:29.349: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9740 Nov 6 01:51:29.351: INFO: creating *v1.Role: csi-mock-volumes-9740-6674/external-snapshotter-leaderelection-csi-mock-volumes-9740 Nov 6 01:51:29.353: INFO: creating *v1.RoleBinding: csi-mock-volumes-9740-6674/external-snapshotter-leaderelection Nov 6 01:51:29.356: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9740-6674/csi-mock Nov 6 01:51:29.359: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9740 Nov 6 01:51:29.362: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9740 Nov 6 01:51:29.365: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9740 Nov 6 01:51:29.368: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9740 Nov 6 01:51:29.370: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9740 Nov 6 01:51:29.373: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9740 Nov 6 01:51:29.375: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9740 Nov 6 01:51:29.377: INFO: creating *v1.StatefulSet: csi-mock-volumes-9740-6674/csi-mockplugin Nov 6 01:51:29.381: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9740 Nov 6 01:51:29.384: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9740" Nov 6 01:51:29.386: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9740 to register on node node1 I1106 01:51:39.444217 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9740","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:51:39.541624 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1106 01:51:39.543302 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9740","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:51:39.544846 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1106 01:51:39.546850 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1106 01:51:39.684108 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9740"},"Error":"","FullError":null} STEP: Creating pod Nov 6 01:51:45.657: INFO: Warning: Making 
PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:51:45.662: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-h2zlw] to have phase Bound Nov 6 01:51:45.664: INFO: PersistentVolumeClaim pvc-h2zlw found but phase is Pending instead of Bound. I1106 01:51:45.670885 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-7a743f7f-2303-4f97-a68d-372ee462cec5","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1106 01:51:45.672752 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-7a743f7f-2303-4f97-a68d-372ee462cec5","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-7a743f7f-2303-4f97-a68d-372ee462cec5"}}},"Error":"","FullError":null} Nov 6 01:51:47.668: INFO: PersistentVolumeClaim pvc-h2zlw found and phase=Bound (2.006732052s) I1106 01:51:52.621072 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:51:52.626: INFO: >>> kubeConfig: /root/.kube/config I1106 01:51:52.796267 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7a743f7f-2303-4f97-a68d-372ee462cec5/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7a743f7f-2303-4f97-a68d-372ee462cec5","storage.kubernetes.io/csiProvisionerIdentity":"1636163499546-8081-csi-mock-csi-mock-volumes-9740"}},"Response":{},"Error":"","FullError":null} I1106 01:51:53.433925 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:51:53.436: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:53.614: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:53.711: INFO: >>> kubeConfig: /root/.kube/config I1106 01:51:53.803837 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7a743f7f-2303-4f97-a68d-372ee462cec5/globalmount","target_path":"/var/lib/kubelet/pods/1ea75cbc-af1d-4180-a5e7-1ad868d04d95/volumes/kubernetes.io~csi/pvc-7a743f7f-2303-4f97-a68d-372ee462cec5/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7a743f7f-2303-4f97-a68d-372ee462cec5","storage.kubernetes.io/csiProvisionerIdentity":"1636163499546-8081-csi-mock-csi-mock-volumes-9740"}},"Response":{},"Error":"","FullError":null} Nov 6 01:51:59.690: INFO: Deleting pod "pvc-volume-tester-bkn2t" in namespace "csi-mock-volumes-9740" Nov 6 01:51:59.695: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bkn2t" to be fully deleted Nov 6 01:52:01.368: INFO: >>> kubeConfig: /root/.kube/config I1106 01:52:01.466326 28 
csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/1ea75cbc-af1d-4180-a5e7-1ad868d04d95/volumes/kubernetes.io~csi/pvc-7a743f7f-2303-4f97-a68d-372ee462cec5/mount"},"Response":{},"Error":"","FullError":null} I1106 01:52:01.613587 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:52:01.615667 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7a743f7f-2303-4f97-a68d-372ee462cec5/globalmount"},"Response":{},"Error":"","FullError":null} I1106 01:52:05.725521 28 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Nov 6 01:52:06.705: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h2zlw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9740", SelfLink:"", UID:"7a743f7f-2303-4f97-a68d-372ee462cec5", ResourceVersion:"96945", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760305, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042ea5b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042ea5d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003a34520), VolumeMode:(*v1.PersistentVolumeMode)(0xc003a34530), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:52:06.705: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h2zlw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9740", SelfLink:"", UID:"7a743f7f-2303-4f97-a68d-372ee462cec5", ResourceVersion:"96946", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760305, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9740"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042ea630), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc0042ea648)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0042ea660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0042ea678)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003a34560), VolumeMode:(*v1.PersistentVolumeMode)(0xc003a34570), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:52:06.705: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h2zlw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9740", SelfLink:"", UID:"7a743f7f-2303-4f97-a68d-372ee462cec5", ResourceVersion:"96954", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760305, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9740"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004acceb8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004acced0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004accee8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004accf00)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-7a743f7f-2303-4f97-a68d-372ee462cec5", StorageClassName:(*string)(0xc004d4cc20), VolumeMode:(*v1.PersistentVolumeMode)(0xc004d4cc30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:52:06.705: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h2zlw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9740", SelfLink:"", UID:"7a743f7f-2303-4f97-a68d-372ee462cec5", ResourceVersion:"96955", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760305, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", 
"pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9740"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004accf30), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004accf48)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004accf60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004accf78)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-7a743f7f-2303-4f97-a68d-372ee462cec5", StorageClassName:(*string)(0xc004d4cc60), VolumeMode:(*v1.PersistentVolumeMode)(0xc004d4cc70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:52:06.705: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h2zlw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9740", SelfLink:"", UID:"7a743f7f-2303-4f97-a68d-372ee462cec5", ResourceVersion:"97578", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760305, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc004accfa8), DeletionGracePeriodSeconds:(*int64)(0xc00458b538), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9740"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004accfc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004accfd8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004accff0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004acd008)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-7a743f7f-2303-4f97-a68d-372ee462cec5", StorageClassName:(*string)(0xc004d4ccb0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004d4ccc0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:52:06.705: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-h2zlw", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9740", SelfLink:"", UID:"7a743f7f-2303-4f97-a68d-372ee462cec5", ResourceVersion:"97579", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760305, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc003eb7158), DeletionGracePeriodSeconds:(*int64)(0xc003eddb78), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9740"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003eb7188), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003eb7260)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003eb7290), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003eb72c0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-7a743f7f-2303-4f97-a68d-372ee462cec5", StorageClassName:(*string)(0xc003d34ea0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003d34ec0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-bkn2t Nov 6 01:52:06.706: INFO: Deleting pod "pvc-volume-tester-bkn2t" in namespace "csi-mock-volumes-9740" STEP: Deleting claim pvc-h2zlw STEP: Deleting storageclass csi-mock-volumes-9740-scftw8c STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9740 STEP: Waiting for namespaces [csi-mock-volumes-9740] to vanish STEP: uninstalling csi mock driver Nov 6 01:52:12.738: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9740-6674/csi-attacher Nov 6 01:52:12.742: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9740 Nov 6 01:52:12.746: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9740 Nov 6 01:52:12.749: INFO: deleting *v1.Role: csi-mock-volumes-9740-6674/external-attacher-cfg-csi-mock-volumes-9740 Nov 6 01:52:12.753: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9740-6674/csi-attacher-role-cfg Nov 6 01:52:12.756: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9740-6674/csi-provisioner Nov 6 01:52:12.762: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9740 Nov 6 
01:52:12.765: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9740 Nov 6 01:52:12.771: INFO: deleting *v1.Role: csi-mock-volumes-9740-6674/external-provisioner-cfg-csi-mock-volumes-9740 Nov 6 01:52:12.777: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9740-6674/csi-provisioner-role-cfg Nov 6 01:52:12.780: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9740-6674/csi-resizer Nov 6 01:52:12.785: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9740 Nov 6 01:52:12.793: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9740 Nov 6 01:52:12.796: INFO: deleting *v1.Role: csi-mock-volumes-9740-6674/external-resizer-cfg-csi-mock-volumes-9740 Nov 6 01:52:12.799: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9740-6674/csi-resizer-role-cfg Nov 6 01:52:12.802: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9740-6674/csi-snapshotter Nov 6 01:52:12.806: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9740 Nov 6 01:52:12.810: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9740 Nov 6 01:52:12.813: INFO: deleting *v1.Role: csi-mock-volumes-9740-6674/external-snapshotter-leaderelection-csi-mock-volumes-9740 Nov 6 01:52:12.816: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9740-6674/external-snapshotter-leaderelection Nov 6 01:52:12.819: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9740-6674/csi-mock Nov 6 01:52:12.822: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9740 Nov 6 01:52:12.826: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9740 Nov 6 01:52:12.828: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9740 Nov 6 01:52:12.832: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9740 Nov 6 01:52:12.835: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9740 Nov 6 01:52:12.838: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9740 Nov 6 01:52:12.842: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9740 Nov 6 01:52:12.846: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9740-6674/csi-mockplugin Nov 6 01:52:12.849: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9740 STEP: deleting the driver namespace: csi-mock-volumes-9740-6674 STEP: Waiting for namespaces [csi-mock-volumes-9740-6674] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:52:56.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:87.649 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, immediate binding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":4,"skipped":126,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:23.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 STEP: Building a driver namespace object, basename csi-mock-volumes-6589 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:51:23.572: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-attacher Nov 6 01:51:23.575: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6589 Nov 6 01:51:23.575: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6589 Nov 6 01:51:23.578: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6589 Nov 6 01:51:23.581: INFO: creating *v1.Role: csi-mock-volumes-6589-3982/external-attacher-cfg-csi-mock-volumes-6589 Nov 6 01:51:23.584: INFO: creating *v1.RoleBinding: csi-mock-volumes-6589-3982/csi-attacher-role-cfg Nov 6 01:51:23.587: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-provisioner Nov 6 01:51:23.590: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6589 Nov 6 01:51:23.590: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6589 Nov 6 01:51:23.592: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6589 Nov 6 01:51:23.596: INFO: creating *v1.Role: csi-mock-volumes-6589-3982/external-provisioner-cfg-csi-mock-volumes-6589 Nov 6 01:51:23.598: INFO: creating *v1.RoleBinding: csi-mock-volumes-6589-3982/csi-provisioner-role-cfg Nov 6 01:51:23.600: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-resizer Nov 6 01:51:23.603: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6589 Nov 6 01:51:23.603: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6589 Nov 6 01:51:23.605: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6589 Nov 6 01:51:23.608: INFO: creating *v1.Role: csi-mock-volumes-6589-3982/external-resizer-cfg-csi-mock-volumes-6589 Nov 6 01:51:23.610: INFO: creating *v1.RoleBinding: csi-mock-volumes-6589-3982/csi-resizer-role-cfg Nov 6 01:51:23.613: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-snapshotter Nov 6 01:51:23.616: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6589 Nov 6 01:51:23.616: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6589 Nov 6 01:51:23.619: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6589 Nov 6 01:51:23.621: INFO: creating *v1.Role: csi-mock-volumes-6589-3982/external-snapshotter-leaderelection-csi-mock-volumes-6589 Nov 6 01:51:23.624: INFO: creating *v1.RoleBinding: csi-mock-volumes-6589-3982/external-snapshotter-leaderelection Nov 6 01:51:23.626: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-mock Nov 6 01:51:23.628: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6589 Nov 6 01:51:23.631: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6589 Nov 6 01:51:23.633: INFO: creating *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6589 Nov 6 01:51:23.635: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6589 Nov 6 01:51:23.638: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6589 Nov 6 01:51:23.640: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6589 Nov 6 01:51:23.643: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6589 Nov 6 01:51:23.646: INFO: creating *v1.StatefulSet: csi-mock-volumes-6589-3982/csi-mockplugin Nov 6 01:51:23.653: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6589 Nov 6 01:51:23.656: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6589" Nov 6 01:51:23.659: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6589 to register on node node2 STEP: Creating pod with fsGroup Nov 6 01:51:44.928: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:51:44.933: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5h6zt] to have phase Bound Nov 6 01:51:44.935: INFO: PersistentVolumeClaim pvc-5h6zt found but phase is Pending instead of Bound. Nov 6 01:51:46.938: INFO: PersistentVolumeClaim pvc-5h6zt found and phase=Bound (2.005016s) Nov 6 01:51:56.961: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-6589] Namespace:csi-mock-volumes-6589 PodName:pvc-volume-tester-gw6v2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:56.961: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:57.061: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-6589/csi-mock-volumes-6589'; sync] Namespace:csi-mock-volumes-6589 PodName:pvc-volume-tester-gw6v2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:57.061: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:58.896: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-6589/csi-mock-volumes-6589] Namespace:csi-mock-volumes-6589 PodName:pvc-volume-tester-gw6v2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:58.896: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:51:58.973: INFO: pod csi-mock-volumes-6589/pvc-volume-tester-gw6v2 exec for cmd ls -l /mnt/test/csi-mock-volumes-6589/csi-mock-volumes-6589, stdout: -rw-r--r-- 1 root root 13 Nov 6 01:51 /mnt/test/csi-mock-volumes-6589/csi-mock-volumes-6589, stderr: Nov 6 01:51:58.973: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-6589] Namespace:csi-mock-volumes-6589 PodName:pvc-volume-tester-gw6v2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:51:58.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-gw6v2 Nov 6 01:51:59.064: INFO: Deleting pod "pvc-volume-tester-gw6v2" in namespace "csi-mock-volumes-6589" Nov 6 01:51:59.069: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gw6v2" to be fully deleted STEP: Deleting claim pvc-5h6zt Nov 6 01:52:39.082: INFO: Waiting up to 2m0s for PersistentVolume pvc-87b2ec8f-fc84-4d7d-8073-ce55202b3c3b to get deleted Nov 6 01:52:39.085: INFO: PersistentVolume pvc-87b2ec8f-fc84-4d7d-8073-ce55202b3c3b found and phase=Bound (2.450105ms) Nov 6 01:52:41.090: INFO: PersistentVolume 
pvc-87b2ec8f-fc84-4d7d-8073-ce55202b3c3b was removed STEP: Deleting storageclass csi-mock-volumes-6589-sc8zcw7 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6589 STEP: Waiting for namespaces [csi-mock-volumes-6589] to vanish STEP: uninstalling csi mock driver Nov 6 01:52:47.103: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-attacher Nov 6 01:52:47.108: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6589 Nov 6 01:52:47.113: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6589 Nov 6 01:52:47.116: INFO: deleting *v1.Role: csi-mock-volumes-6589-3982/external-attacher-cfg-csi-mock-volumes-6589 Nov 6 01:52:47.120: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6589-3982/csi-attacher-role-cfg Nov 6 01:52:47.125: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-provisioner Nov 6 01:52:47.129: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6589 Nov 6 01:52:47.135: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6589 Nov 6 01:52:47.142: INFO: deleting *v1.Role: csi-mock-volumes-6589-3982/external-provisioner-cfg-csi-mock-volumes-6589 Nov 6 01:52:47.148: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6589-3982/csi-provisioner-role-cfg Nov 6 01:52:47.155: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-resizer Nov 6 01:52:47.159: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6589 Nov 6 01:52:47.165: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6589 Nov 6 01:52:47.169: INFO: deleting *v1.Role: csi-mock-volumes-6589-3982/external-resizer-cfg-csi-mock-volumes-6589 Nov 6 01:52:47.175: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6589-3982/csi-resizer-role-cfg Nov 6 01:52:47.181: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-snapshotter Nov 6 01:52:47.185: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6589 Nov 6 01:52:47.188: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6589 Nov 6 01:52:47.191: INFO: deleting *v1.Role: csi-mock-volumes-6589-3982/external-snapshotter-leaderelection-csi-mock-volumes-6589 Nov 6 01:52:47.197: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6589-3982/external-snapshotter-leaderelection Nov 6 01:52:47.200: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6589-3982/csi-mock Nov 6 01:52:47.203: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6589 Nov 6 01:52:47.206: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6589 Nov 6 01:52:47.210: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6589 Nov 6 01:52:47.213: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6589 Nov 6 01:52:47.217: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6589 Nov 6 01:52:47.221: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6589 Nov 6 01:52:47.224: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6589 Nov 6 01:52:47.227: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6589-3982/csi-mockplugin Nov 6 01:52:47.232: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6589 STEP: deleting the driver namespace: csi-mock-volumes-6589-3982 STEP: Waiting for namespaces [csi-mock-volumes-6589-3982] to 
vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:53:15.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:111.779 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436 should not modify fsGroup if fsGroupPolicy=None /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":4,"skipped":85,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1106 01:50:33.489446 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:33.489: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:33.491: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-7981 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:50:34.234: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-attacher Nov 6 01:50:34.237: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7981 Nov 6 01:50:34.237: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7981 Nov 6 01:50:34.241: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7981 Nov 6 01:50:34.244: INFO: creating *v1.Role: csi-mock-volumes-7981-1756/external-attacher-cfg-csi-mock-volumes-7981 Nov 6 01:50:34.248: INFO: creating *v1.RoleBinding: csi-mock-volumes-7981-1756/csi-attacher-role-cfg Nov 6 01:50:34.251: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-provisioner Nov 6 01:50:34.254: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7981 Nov 6 01:50:34.254: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7981 Nov 6 01:50:34.257: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7981 Nov 6 01:50:34.260: INFO: creating *v1.Role: csi-mock-volumes-7981-1756/external-provisioner-cfg-csi-mock-volumes-7981 Nov 6 01:50:34.263: INFO: creating *v1.RoleBinding: csi-mock-volumes-7981-1756/csi-provisioner-role-cfg Nov 6 01:50:34.265: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-resizer Nov 6 01:50:34.268: INFO: creating *v1.ClusterRole: 
external-resizer-runner-csi-mock-volumes-7981 Nov 6 01:50:34.268: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7981 Nov 6 01:50:34.271: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7981 Nov 6 01:50:34.277: INFO: creating *v1.Role: csi-mock-volumes-7981-1756/external-resizer-cfg-csi-mock-volumes-7981 Nov 6 01:50:34.280: INFO: creating *v1.RoleBinding: csi-mock-volumes-7981-1756/csi-resizer-role-cfg Nov 6 01:50:34.283: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-snapshotter Nov 6 01:50:34.286: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7981 Nov 6 01:50:34.286: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7981 Nov 6 01:50:34.289: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7981 Nov 6 01:50:34.291: INFO: creating *v1.Role: csi-mock-volumes-7981-1756/external-snapshotter-leaderelection-csi-mock-volumes-7981 Nov 6 01:50:34.294: INFO: creating *v1.RoleBinding: csi-mock-volumes-7981-1756/external-snapshotter-leaderelection Nov 6 01:50:34.297: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-mock Nov 6 01:50:34.299: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7981 Nov 6 01:50:34.302: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7981 Nov 6 01:50:34.304: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7981 Nov 6 01:50:34.306: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7981 Nov 6 01:50:34.309: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7981 Nov 6 01:50:34.311: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7981 Nov 6 01:50:34.314: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7981 Nov 6 01:50:34.316: INFO: creating *v1.StatefulSet: csi-mock-volumes-7981-1756/csi-mockplugin Nov 6 01:50:34.321: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7981 Nov 6 01:50:34.324: INFO: creating *v1.StatefulSet: csi-mock-volumes-7981-1756/csi-mockplugin-resizer Nov 6 01:50:34.327: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7981" Nov 6 01:50:34.330: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7981 to register on node node2 STEP: Creating pod Nov 6 01:51:00.727: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:51:00.732: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-vmmkp] to have phase Bound Nov 6 01:51:00.733: INFO: PersistentVolumeClaim pvc-vmmkp found but phase is Pending instead of Bound. 
Nov 6 01:51:02.738: INFO: PersistentVolumeClaim pvc-vmmkp found and phase=Bound (2.005928106s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Nov 6 01:51:20.783: INFO: Deleting pod "pvc-volume-tester-vkqk7" in namespace "csi-mock-volumes-7981" Nov 6 01:51:20.788: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vkqk7" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-vkqk7 Nov 6 01:51:32.814: INFO: Deleting pod "pvc-volume-tester-vkqk7" in namespace "csi-mock-volumes-7981" STEP: Deleting pod pvc-volume-tester-fmh2g Nov 6 01:51:32.817: INFO: Deleting pod "pvc-volume-tester-fmh2g" in namespace "csi-mock-volumes-7981" Nov 6 01:51:32.822: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fmh2g" to be fully deleted STEP: Deleting claim pvc-vmmkp Nov 6 01:52:46.835: INFO: Waiting up to 2m0s for PersistentVolume pvc-11b7d7ec-3cf5-4714-8196-ab5c7e6a3b9d to get deleted Nov 6 01:52:46.838: INFO: PersistentVolume pvc-11b7d7ec-3cf5-4714-8196-ab5c7e6a3b9d found and phase=Bound (2.288823ms) Nov 6 01:52:48.841: INFO: PersistentVolume pvc-11b7d7ec-3cf5-4714-8196-ab5c7e6a3b9d was removed STEP: Deleting storageclass csi-mock-volumes-7981-scgs2bm STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7981 STEP: Waiting for namespaces [csi-mock-volumes-7981] to vanish STEP: uninstalling csi mock driver Nov 6 01:52:54.855: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-attacher Nov 6 01:52:54.861: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7981 Nov 6 01:52:54.865: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7981 Nov 6 01:52:54.868: INFO: deleting *v1.Role: csi-mock-volumes-7981-1756/external-attacher-cfg-csi-mock-volumes-7981 Nov 6 01:52:54.871: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7981-1756/csi-attacher-role-cfg Nov 6 01:52:54.876: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-provisioner Nov 6 01:52:54.881: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7981 Nov 6 01:52:54.887: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7981 Nov 6 01:52:54.891: INFO: deleting *v1.Role: csi-mock-volumes-7981-1756/external-provisioner-cfg-csi-mock-volumes-7981 Nov 6 01:52:54.897: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7981-1756/csi-provisioner-role-cfg Nov 6 01:52:54.904: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-resizer Nov 6 01:52:54.907: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7981 Nov 6 01:52:54.911: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7981 Nov 6 01:52:54.914: INFO: deleting *v1.Role: csi-mock-volumes-7981-1756/external-resizer-cfg-csi-mock-volumes-7981 Nov 6 01:52:54.918: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7981-1756/csi-resizer-role-cfg Nov 6 01:52:54.921: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-snapshotter Nov 6 01:52:54.924: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7981 Nov 6 01:52:54.928: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7981 Nov 6 01:52:54.932: INFO: deleting *v1.Role: csi-mock-volumes-7981-1756/external-snapshotter-leaderelection-csi-mock-volumes-7981 Nov 6 01:52:54.935: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-7981-1756/external-snapshotter-leaderelection Nov 6 01:52:54.939: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7981-1756/csi-mock Nov 6 01:52:54.942: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7981 Nov 6 01:52:54.946: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7981 Nov 6 01:52:54.949: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7981 Nov 6 01:52:54.952: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7981 Nov 6 01:52:54.956: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7981 Nov 6 01:52:54.959: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7981 Nov 6 01:52:54.962: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7981 Nov 6 01:52:54.966: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7981-1756/csi-mockplugin Nov 6 01:52:54.970: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7981 Nov 6 01:52:54.974: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7981-1756/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-7981-1756 STEP: Waiting for namespaces [csi-mock-volumes-7981-1756] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:53:22.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:169.537 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":1,"skipped":22,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:53:23.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 6 01:53:27.103: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4602 PodName:hostexec-node1-psscm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:27.103: INFO: >>> kubeConfig: /root/.kube/config Nov 6 
01:53:27.230: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 6 01:53:27.230: INFO: exec node1: stdout: "0\n" Nov 6 01:53:27.230: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 6 01:53:27.230: INFO: exec node1: exit code: 0 Nov 6 01:53:27.230: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:53:27.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4602" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.192 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:33.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-4673 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:52:33.602: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-attacher Nov 6 01:52:33.605: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4673 Nov 6 01:52:33.605: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4673 Nov 6 01:52:33.608: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4673 Nov 6 01:52:33.612: INFO: creating *v1.Role: csi-mock-volumes-4673-4705/external-attacher-cfg-csi-mock-volumes-4673 Nov 6 01:52:33.614: INFO: creating *v1.RoleBinding: csi-mock-volumes-4673-4705/csi-attacher-role-cfg Nov 6 01:52:33.618: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-provisioner Nov 6 01:52:33.620: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4673 Nov 6 01:52:33.620: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4673 Nov 6 
01:52:33.623: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4673 Nov 6 01:52:33.625: INFO: creating *v1.Role: csi-mock-volumes-4673-4705/external-provisioner-cfg-csi-mock-volumes-4673 Nov 6 01:52:33.628: INFO: creating *v1.RoleBinding: csi-mock-volumes-4673-4705/csi-provisioner-role-cfg Nov 6 01:52:33.630: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-resizer Nov 6 01:52:33.633: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4673 Nov 6 01:52:33.633: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4673 Nov 6 01:52:33.636: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4673 Nov 6 01:52:33.639: INFO: creating *v1.Role: csi-mock-volumes-4673-4705/external-resizer-cfg-csi-mock-volumes-4673 Nov 6 01:52:33.642: INFO: creating *v1.RoleBinding: csi-mock-volumes-4673-4705/csi-resizer-role-cfg Nov 6 01:52:33.645: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-snapshotter Nov 6 01:52:33.648: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4673 Nov 6 01:52:33.648: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4673 Nov 6 01:52:33.651: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4673 Nov 6 01:52:33.653: INFO: creating *v1.Role: csi-mock-volumes-4673-4705/external-snapshotter-leaderelection-csi-mock-volumes-4673 Nov 6 01:52:33.656: INFO: creating *v1.RoleBinding: csi-mock-volumes-4673-4705/external-snapshotter-leaderelection Nov 6 01:52:33.658: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-mock Nov 6 01:52:33.661: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4673 Nov 6 01:52:33.665: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4673 Nov 6 01:52:33.668: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4673 Nov 6 01:52:33.670: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4673 Nov 6 01:52:33.672: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4673 Nov 6 01:52:33.675: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4673 Nov 6 01:52:33.677: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4673 Nov 6 01:52:33.680: INFO: creating *v1.StatefulSet: csi-mock-volumes-4673-4705/csi-mockplugin Nov 6 01:52:33.684: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4673 Nov 6 01:52:33.687: INFO: creating *v1.StatefulSet: csi-mock-volumes-4673-4705/csi-mockplugin-attacher Nov 6 01:52:33.690: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4673" Nov 6 01:52:33.692: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4673 to register on node node1 STEP: Creating pod Nov 6 01:52:48.213: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 6 01:53:08.236: INFO: Deleting pod "pvc-volume-tester-jz6fj" in namespace "csi-mock-volumes-4673" Nov 6 01:53:08.241: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jz6fj" to be fully deleted STEP: Deleting pod pvc-volume-tester-jz6fj Nov 6 01:53:20.249: INFO: Deleting pod "pvc-volume-tester-jz6fj" in namespace "csi-mock-volumes-4673" STEP: Deleting claim pvc-zj6r4 Nov 6 01:53:20.259: INFO: Waiting up to 2m0s for PersistentVolume 
pvc-2b4960df-0808-4d8b-af0c-613d8fac1d70 to get deleted Nov 6 01:53:20.261: INFO: PersistentVolume pvc-2b4960df-0808-4d8b-af0c-613d8fac1d70 found and phase=Bound (2.127272ms) Nov 6 01:53:22.264: INFO: PersistentVolume pvc-2b4960df-0808-4d8b-af0c-613d8fac1d70 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-4673 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4673 STEP: Waiting for namespaces [csi-mock-volumes-4673] to vanish STEP: uninstalling csi mock driver Nov 6 01:53:28.279: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-attacher Nov 6 01:53:28.282: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4673 Nov 6 01:53:28.286: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4673 Nov 6 01:53:28.291: INFO: deleting *v1.Role: csi-mock-volumes-4673-4705/external-attacher-cfg-csi-mock-volumes-4673 Nov 6 01:53:28.296: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4673-4705/csi-attacher-role-cfg Nov 6 01:53:28.300: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-provisioner Nov 6 01:53:28.304: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4673 Nov 6 01:53:28.308: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4673 Nov 6 01:53:28.312: INFO: deleting *v1.Role: csi-mock-volumes-4673-4705/external-provisioner-cfg-csi-mock-volumes-4673 Nov 6 01:53:28.316: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4673-4705/csi-provisioner-role-cfg Nov 6 01:53:28.320: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-resizer Nov 6 01:53:28.324: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4673 Nov 6 01:53:28.327: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4673 Nov 6 01:53:28.330: INFO: deleting *v1.Role: csi-mock-volumes-4673-4705/external-resizer-cfg-csi-mock-volumes-4673 Nov 6 01:53:28.333: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4673-4705/csi-resizer-role-cfg Nov 6 01:53:28.336: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-snapshotter Nov 6 01:53:28.339: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4673 Nov 6 01:53:28.343: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4673 Nov 6 01:53:28.346: INFO: deleting *v1.Role: csi-mock-volumes-4673-4705/external-snapshotter-leaderelection-csi-mock-volumes-4673 Nov 6 01:53:28.350: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4673-4705/external-snapshotter-leaderelection Nov 6 01:53:28.353: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4673-4705/csi-mock Nov 6 01:53:28.357: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4673 Nov 6 01:53:28.360: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4673 Nov 6 01:53:28.363: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4673 Nov 6 01:53:28.367: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4673 Nov 6 01:53:28.376: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4673 Nov 6 01:53:28.382: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4673 Nov 6 01:53:28.386: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4673 Nov 6 01:53:28.390: INFO: deleting *v1.StatefulSet: 
csi-mock-volumes-4673-4705/csi-mockplugin Nov 6 01:53:28.394: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4673 Nov 6 01:53:28.398: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4673-4705/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4673-4705 STEP: Waiting for namespaces [csi-mock-volumes-4673-4705] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:53:34.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:60.885 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, have capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":6,"skipped":145,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:53:27.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 6 01:53:27.291: INFO: The status of Pod test-hostpath-type-ldp2m is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:53:29.295: INFO: The status of Pod test-hostpath-type-ldp2m is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:53:31.294: INFO: The status of Pod test-hostpath-type-ldp2m is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:53:33.295: INFO: The status of Pod test-hostpath-type-ldp2m is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Nov 6 01:53:33.297: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-4166 PodName:test-hostpath-type-ldp2m ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:53:33.297: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:53:35.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-4166" for this suite. 
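The HostPathType failure exercised above comes from the kubelet's hostPath type check: the test creates a block device ('mknod /mnt/test/ablkdev b 89 1') and then mounts that path with HostPathType CharDevice, which has to be rejected. A minimal stand-alone sketch of the same mismatch, assuming a generic busybox image and a placeholder host path rather than the suite's helper pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-chardev-mismatch
spec:
  nodeName: node2
  containers:
  - name: tester
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: dev
      mountPath: /data/ablkdev
  volumes:
  - name: dev
    hostPath:
      path: /mnt/disks/ablkdev   # assumed path to a block device created with mknod
      type: CharDevice           # deliberate mismatch: the path is a block device
EOF
# The pod stays in ContainerCreating; 'kubectl describe pod hostpath-chardev-mismatch'
# shows the hostPath type check failure in its events, which is what the test's
# "Checking for HostPathType error event" step asserts.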
• [SLOW TEST:8.172 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:369 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev","total":-1,"completed":2,"skipped":56,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:56.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 6 01:53:00.940: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-cd0d0a86-5060-45f0-a601-4450c27276aa] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:00.940: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:01.028: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-658b3915-8c41-4dc4-ba0d-654830b146e0] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:01.028: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:01.121: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a937564d-d4f7-4e32-840a-e677f6848a6c] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:01.121: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:01.207: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f54b194d-aa5a-447d-aa4b-ba0cdd61cce9] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:01.207: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:01.298: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6c752a92-e45d-4a65-a2af-a28a07a1eadf] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} 
Nov 6 01:53:01.298: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:01.387: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6753f664-92de-4f5d-b749-7e1cd50e3fca] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:01.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:53:01.469: INFO: Creating a PV followed by a PVC Nov 6 01:53:01.476: INFO: Creating a PV followed by a PVC Nov 6 01:53:01.482: INFO: Creating a PV followed by a PVC Nov 6 01:53:01.487: INFO: Creating a PV followed by a PVC Nov 6 01:53:01.493: INFO: Creating a PV followed by a PVC Nov 6 01:53:01.499: INFO: Creating a PV followed by a PVC Nov 6 01:53:11.547: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 6 01:53:13.566: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-020c6c3b-69ef-44fb-805b-38c7c5fdc8f0] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:13.566: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:13.675: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-76a05793-ff01-42c9-a0be-7a4b9d2a24f3] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:13.675: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:13.768: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c9bcd8a0-a21b-476f-b7a7-ebae759ee30a] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:13.769: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:13.863: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0af09f7b-089e-4ad9-9df3-ddaef7e1b556] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:13.863: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:13.951: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6f9fc80b-60f1-4b79-806b-e955271f4d46] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:13.951: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:14.044: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1f443bd2-d3b6-4c78-a877-4437a12bc7bc] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:14.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs 
Nov 6 01:53:14.133: INFO: Creating a PV followed by a PVC Nov 6 01:53:14.140: INFO: Creating a PV followed by a PVC Nov 6 01:53:14.146: INFO: Creating a PV followed by a PVC Nov 6 01:53:14.151: INFO: Creating a PV followed by a PVC Nov 6 01:53:14.157: INFO: Creating a PV followed by a PVC Nov 6 01:53:14.162: INFO: Creating a PV followed by a PVC Nov 6 01:53:24.208: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 STEP: Creating a StatefulSet with pod affinity on nodes Nov 6 01:53:24.215: INFO: Found 0 stateful pods, waiting for 3 Nov 6 01:53:34.221: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Nov 6 01:53:34.221: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Nov 6 01:53:34.221: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Nov 6 01:53:34.224: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Nov 6 01:53:34.227: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (2.321827ms) Nov 6 01:53:34.227: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Nov 6 01:53:34.229: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (2.488085ms) Nov 6 01:53:34.229: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Nov 6 01:53:34.231: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (1.7623ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 6 01:53:34.231: INFO: Deleting PersistentVolumeClaim "pvc-bmcv6" Nov 6 01:53:34.235: INFO: Deleting PersistentVolume "local-pv8r9gb" STEP: Cleaning up PVC and PV Nov 6 01:53:34.239: INFO: Deleting PersistentVolumeClaim "pvc-k29pw" Nov 6 01:53:34.242: INFO: Deleting PersistentVolume "local-pvdbb8x" STEP: Cleaning up PVC and PV Nov 6 01:53:34.246: INFO: Deleting PersistentVolumeClaim "pvc-9629l" Nov 6 01:53:34.249: INFO: Deleting PersistentVolume "local-pv2bznv" STEP: Cleaning up PVC and PV Nov 6 01:53:34.253: INFO: Deleting PersistentVolumeClaim "pvc-qxl8b" Nov 6 01:53:34.260: INFO: Deleting PersistentVolume "local-pvglsvw" STEP: Cleaning up PVC and PV Nov 6 01:53:34.264: INFO: Deleting PersistentVolumeClaim "pvc-t544q" Nov 6 01:53:34.267: INFO: Deleting PersistentVolume "local-pvfzl2h" STEP: Cleaning up PVC and PV Nov 6 01:53:34.272: INFO: Deleting PersistentVolumeClaim "pvc-9rz5j" Nov 6 01:53:34.275: INFO: Deleting PersistentVolume "local-pvwj5k8" STEP: Removing the test directory Nov 6 01:53:34.279: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-cd0d0a86-5060-45f0-a601-4450c27276aa] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:34.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 
01:53:36.240: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-658b3915-8c41-4dc4-ba0d-654830b146e0] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:36.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:53:36.331: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a937564d-d4f7-4e32-840a-e677f6848a6c] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:36.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:53:36.410: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f54b194d-aa5a-447d-aa4b-ba0cdd61cce9] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:36.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:53:36.501: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6c752a92-e45d-4a65-a2af-a28a07a1eadf] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:36.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:53:36.708: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6753f664-92de-4f5d-b749-7e1cd50e3fca] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node1-4sk9v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:36.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 6 01:53:37.206: INFO: Deleting PersistentVolumeClaim "pvc-mpcwz" Nov 6 01:53:37.211: INFO: Deleting PersistentVolume "local-pvmbvwl" STEP: Cleaning up PVC and PV Nov 6 01:53:37.214: INFO: Deleting PersistentVolumeClaim "pvc-b5scs" Nov 6 01:53:37.218: INFO: Deleting PersistentVolume "local-pvwr6sk" STEP: Cleaning up PVC and PV Nov 6 01:53:37.221: INFO: Deleting PersistentVolumeClaim "pvc-lv5g7" Nov 6 01:53:37.224: INFO: Deleting PersistentVolume "local-pvdfcqx" STEP: Cleaning up PVC and PV Nov 6 01:53:37.228: INFO: Deleting PersistentVolumeClaim "pvc-dkckc" Nov 6 01:53:37.232: INFO: Deleting PersistentVolume "local-pvftt69" STEP: Cleaning up PVC and PV Nov 6 01:53:37.235: INFO: Deleting PersistentVolumeClaim "pvc-hlgrm" Nov 6 01:53:37.238: INFO: Deleting PersistentVolume "local-pvc7r9s" STEP: Cleaning up PVC and PV Nov 6 01:53:37.242: INFO: Deleting PersistentVolumeClaim "pvc-wcg82" Nov 6 01:53:37.245: INFO: Deleting PersistentVolume "local-pvn8k4r" STEP: Removing the test directory Nov 6 01:53:37.250: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-020c6c3b-69ef-44fb-805b-38c7c5fdc8f0] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Nov 6 01:53:37.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:53:37.405: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-76a05793-ff01-42c9-a0be-7a4b9d2a24f3] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:37.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:53:37.592: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c9bcd8a0-a21b-476f-b7a7-ebae759ee30a] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:37.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:53:37.795: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0af09f7b-089e-4ad9-9df3-ddaef7e1b556] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:37.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:53:37.932: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6f9fc80b-60f1-4b79-806b-e955271f4d46] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:37.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:53:38.025: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1f443bd2-d3b6-4c78-a877-4437a12bc7bc] Namespace:persistent-local-volumes-test-4498 PodName:hostexec-node2-6pq6c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:38.025: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:53:38.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4498" for this suite. 
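Every 'ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt ...' entry in this spec is the framework running a command in the node's mount namespace through a privileged hostexec pod. Roughly the same thing by hand, using the pod and path names from the log (the kubectl form itself is an illustration, not something the suite runs):

# Create a test directory on node1 from inside the cluster:
kubectl exec -n persistent-local-volumes-test-4498 hostexec-node1-4sk9v \
  -c agnhost-container -- \
  nsenter --mount=/rootfs/proc/1/ns/mnt -- \
  sh -c 'mkdir -p /tmp/local-volume-test-cd0d0a86-5060-45f0-a601-4450c27276aa'

# Confirm the three StatefulSet claims bound, as the test does with a 1s timeout each:
for i in 0 1 2; do
  kubectl get pvc "vol1-local-volume-statefulset-$i" \
    -n persistent-local-volumes-test-4498 -o jsonpath='{.status.phase}'
  echo
done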
• [SLOW TEST:41.265 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod management is parallel and pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:434 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod management is parallel and pod has affinity","total":-1,"completed":5,"skipped":135,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:53:35.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6" Nov 6 01:53:41.546: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6 && dd if=/dev/zero of=/tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6/file] Namespace:persistent-local-volumes-test-7999 PodName:hostexec-node1-p7dwq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:41.546: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:41.686: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7999 PodName:hostexec-node1-p7dwq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:41.686: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:41.780: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6 && chmod o+rwx /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6] Namespace:persistent-local-volumes-test-7999 PodName:hostexec-node1-p7dwq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:41.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:53:42.050: INFO: Creating a PV followed by a PVC Nov 6 01:53:42.058: INFO: Waiting for PV 
local-pvszns7 to bind to PVC pvc-cnmnn Nov 6 01:53:42.058: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cnmnn] to have phase Bound Nov 6 01:53:42.061: INFO: PersistentVolumeClaim pvc-cnmnn found but phase is Pending instead of Bound. Nov 6 01:53:44.064: INFO: PersistentVolumeClaim pvc-cnmnn found but phase is Pending instead of Bound. Nov 6 01:53:46.066: INFO: PersistentVolumeClaim pvc-cnmnn found but phase is Pending instead of Bound. Nov 6 01:53:48.071: INFO: PersistentVolumeClaim pvc-cnmnn found but phase is Pending instead of Bound. Nov 6 01:53:50.075: INFO: PersistentVolumeClaim pvc-cnmnn found but phase is Pending instead of Bound. Nov 6 01:53:52.079: INFO: PersistentVolumeClaim pvc-cnmnn found and phase=Bound (10.020041032s) Nov 6 01:53:52.079: INFO: Waiting up to 3m0s for PersistentVolume local-pvszns7 to have phase Bound Nov 6 01:53:52.080: INFO: PersistentVolume local-pvszns7 found and phase=Bound (1.800072ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 6 01:53:56.119: INFO: pod "pod-51d9d4a2-e19e-4e51-ae99-fa5046b1fcf1" created on Node "node1" STEP: Writing in pod1 Nov 6 01:53:56.119: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7999 PodName:pod-51d9d4a2-e19e-4e51-ae99-fa5046b1fcf1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:53:56.119: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:56.298: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:53:56.298: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7999 PodName:pod-51d9d4a2-e19e-4e51-ae99-fa5046b1fcf1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:53:56.298: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:56.551: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 6 01:54:00.575: INFO: pod "pod-92c34ba7-386b-4bc6-ad56-e6411a298bca" created on Node "node1" Nov 6 01:54:00.575: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7999 PodName:pod-92c34ba7-386b-4bc6-ad56-e6411a298bca ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:00.575: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:00.657: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 6 01:54:00.657: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7999 PodName:pod-92c34ba7-386b-4bc6-ad56-e6411a298bca ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:00.657: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:00.745: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6 > 
/mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 6 01:54:00.745: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-7999 PodName:pod-51d9d4a2-e19e-4e51-ae99-fa5046b1fcf1 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:00.745: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:00.825: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-51d9d4a2-e19e-4e51-ae99-fa5046b1fcf1 in namespace persistent-local-volumes-test-7999 STEP: Deleting pod2 STEP: Deleting pod pod-92c34ba7-386b-4bc6-ad56-e6411a298bca in namespace persistent-local-volumes-test-7999 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:54:00.835: INFO: Deleting PersistentVolumeClaim "pvc-cnmnn" Nov 6 01:54:00.839: INFO: Deleting PersistentVolume "local-pvszns7" Nov 6 01:54:00.843: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6] Namespace:persistent-local-volumes-test-7999 PodName:hostexec-node1-p7dwq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:00.843: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:00.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7999 PodName:hostexec-node1-p7dwq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:00.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6/file Nov 6 01:54:01.030: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-7999 PodName:hostexec-node1-p7dwq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:01.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6 Nov 6 01:54:01.121: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e117e99d-f51d-4582-8a49-4f439978cfe6] Namespace:persistent-local-volumes-test-7999 PodName:hostexec-node1-p7dwq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:01.121: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:01.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7999" for this suite. 
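The [Volume type: blockfswithformat] setup and teardown above amount to carving an ext4 filesystem out of a loop-device-backed file on the node and mounting it over the test directory. A minimal sketch of the same sequence, assuming direct shell access to the node and an illustrative directory name (the test itself drives every step through the hostexec pod with nsenter, and the loop device name depends on the node):

# Create a ~20 MiB backing file and attach it to the first free loop device.
VOL_DIR=/tmp/local-volume-test-example        # illustrative path
mkdir -p "${VOL_DIR}"
dd if=/dev/zero of="${VOL_DIR}/file" bs=4096 count=5120
losetup -f "${VOL_DIR}/file"
LOOP_DEV=$(losetup | grep "${VOL_DIR}/file" | awk '{ print $1 }')

# Format the loop device and mount it over the test directory; a local PV/PVC
# pair is then created on top of it and the two pods read/write /mnt/volume1.
mkfs -t ext4 "${LOOP_DEV}"
mount -t ext4 "${LOOP_DEV}" "${VOL_DIR}"
chmod o+rwx "${VOL_DIR}"

# Teardown mirrors the AfterEach block above.
umount "${VOL_DIR}"
losetup -d "${LOOP_DEV}"
rm -r "${VOL_DIR}"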
• [SLOW TEST:25.719 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":85,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:53:34.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:53:38.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-afeee7fc-d2c5-48bf-bb6a-5a6f78d1d1fd && mount --bind /tmp/local-volume-test-afeee7fc-d2c5-48bf-bb6a-5a6f78d1d1fd /tmp/local-volume-test-afeee7fc-d2c5-48bf-bb6a-5a6f78d1d1fd] Namespace:persistent-local-volumes-test-6546 PodName:hostexec-node2-fhm6k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:53:38.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:53:38.743: INFO: Creating a PV followed by a PVC Nov 6 01:53:38.749: INFO: Waiting for PV local-pvtvz94 to bind to PVC pvc-95vl2 Nov 6 01:53:38.749: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-95vl2] to have phase Bound Nov 6 01:53:38.752: INFO: PersistentVolumeClaim pvc-95vl2 found but phase is Pending instead of Bound. Nov 6 01:53:40.755: INFO: PersistentVolumeClaim pvc-95vl2 found but phase is Pending instead of Bound. Nov 6 01:53:42.758: INFO: PersistentVolumeClaim pvc-95vl2 found but phase is Pending instead of Bound. Nov 6 01:53:44.763: INFO: PersistentVolumeClaim pvc-95vl2 found but phase is Pending instead of Bound. Nov 6 01:53:46.766: INFO: PersistentVolumeClaim pvc-95vl2 found but phase is Pending instead of Bound. Nov 6 01:53:48.768: INFO: PersistentVolumeClaim pvc-95vl2 found but phase is Pending instead of Bound. Nov 6 01:53:50.777: INFO: PersistentVolumeClaim pvc-95vl2 found but phase is Pending instead of Bound. 
Nov 6 01:53:52.783: INFO: PersistentVolumeClaim pvc-95vl2 found and phase=Bound (14.033987401s) Nov 6 01:53:52.783: INFO: Waiting up to 3m0s for PersistentVolume local-pvtvz94 to have phase Bound Nov 6 01:53:52.785: INFO: PersistentVolume local-pvtvz94 found and phase=Bound (2.279538ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:53:56.812: INFO: pod "pod-c8bb3001-4961-4314-b764-aeca9eabc2d4" created on Node "node2" STEP: Writing in pod1 Nov 6 01:53:56.812: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6546 PodName:pod-c8bb3001-4961-4314-b764-aeca9eabc2d4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:53:56.812: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:56.887: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:53:56.888: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6546 PodName:pod-c8bb3001-4961-4314-b764-aeca9eabc2d4 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:53:56.888: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:53:56.962: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-c8bb3001-4961-4314-b764-aeca9eabc2d4 in namespace persistent-local-volumes-test-6546 STEP: Creating pod2 STEP: Creating a pod Nov 6 01:54:00.990: INFO: pod "pod-e692871f-792f-431d-82f6-437a8a5b8ff0" created on Node "node2" STEP: Reading in pod2 Nov 6 01:54:00.990: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6546 PodName:pod-e692871f-792f-431d-82f6-437a8a5b8ff0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:00.990: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:01.098: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-e692871f-792f-431d-82f6-437a8a5b8ff0 in namespace persistent-local-volumes-test-6546 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:54:01.104: INFO: Deleting PersistentVolumeClaim "pvc-95vl2" Nov 6 01:54:01.108: INFO: Deleting PersistentVolume "local-pvtvz94" STEP: Removing the test directory Nov 6 01:54:01.111: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-afeee7fc-d2c5-48bf-bb6a-5a6f78d1d1fd && rm -r /tmp/local-volume-test-afeee7fc-d2c5-48bf-bb6a-5a6f78d1d1fd] Namespace:persistent-local-volumes-test-6546 PodName:hostexec-node2-fhm6k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:01.111: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 
01:54:01.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6546" for this suite. • [SLOW TEST:26.813 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:38.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-1974 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 6 01:52:38.715: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-attacher Nov 6 01:52:38.718: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1974 Nov 6 01:52:38.718: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-1974 Nov 6 01:52:38.721: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1974 Nov 6 01:52:38.724: INFO: creating *v1.Role: csi-mock-volumes-1974-2039/external-attacher-cfg-csi-mock-volumes-1974 Nov 6 01:52:38.726: INFO: creating *v1.RoleBinding: csi-mock-volumes-1974-2039/csi-attacher-role-cfg Nov 6 01:52:38.728: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-provisioner Nov 6 01:52:38.730: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1974 Nov 6 01:52:38.730: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-1974 Nov 6 01:52:38.733: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1974 Nov 6 01:52:38.736: INFO: creating *v1.Role: csi-mock-volumes-1974-2039/external-provisioner-cfg-csi-mock-volumes-1974 Nov 6 01:52:38.739: INFO: creating *v1.RoleBinding: csi-mock-volumes-1974-2039/csi-provisioner-role-cfg Nov 6 01:52:38.742: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-resizer Nov 6 01:52:38.745: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1974 Nov 6 01:52:38.745: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-1974 Nov 6 01:52:38.748: INFO: creating 
*v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1974 Nov 6 01:52:38.751: INFO: creating *v1.Role: csi-mock-volumes-1974-2039/external-resizer-cfg-csi-mock-volumes-1974 Nov 6 01:52:38.753: INFO: creating *v1.RoleBinding: csi-mock-volumes-1974-2039/csi-resizer-role-cfg Nov 6 01:52:38.756: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-snapshotter Nov 6 01:52:38.759: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1974 Nov 6 01:52:38.759: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-1974 Nov 6 01:52:38.762: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1974 Nov 6 01:52:38.765: INFO: creating *v1.Role: csi-mock-volumes-1974-2039/external-snapshotter-leaderelection-csi-mock-volumes-1974 Nov 6 01:52:38.767: INFO: creating *v1.RoleBinding: csi-mock-volumes-1974-2039/external-snapshotter-leaderelection Nov 6 01:52:38.770: INFO: creating *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-mock Nov 6 01:52:38.772: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1974 Nov 6 01:52:38.774: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1974 Nov 6 01:52:38.776: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1974 Nov 6 01:52:38.779: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1974 Nov 6 01:52:38.782: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1974 Nov 6 01:52:38.784: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1974 Nov 6 01:52:38.787: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1974 Nov 6 01:52:38.790: INFO: creating *v1.StatefulSet: csi-mock-volumes-1974-2039/csi-mockplugin Nov 6 01:52:38.794: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-1974 Nov 6 01:52:38.798: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-1974" Nov 6 01:52:38.800: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-1974 to register on node node1 I1106 01:52:43.873326 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1974","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:52:43.970286 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1106 01:52:43.971960 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-1974","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:52:43.974013 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1106 01:52:43.976140 31 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1106 01:52:44.826907 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-1974"},"Error":"","FullError":null} STEP: Creating pod Nov 6 01:52:48.315: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:52:48.319: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-htnr9] to have phase Bound Nov 6 01:52:48.322: INFO: PersistentVolumeClaim pvc-htnr9 found but phase is Pending instead of Bound. I1106 01:52:48.327850 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054"}}},"Error":"","FullError":null} Nov 6 01:52:50.326: INFO: PersistentVolumeClaim pvc-htnr9 found and phase=Bound (2.006357559s) Nov 6 01:52:50.339: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-htnr9] to have phase Bound Nov 6 01:52:50.343: INFO: PersistentVolumeClaim pvc-htnr9 found and phase=Bound (4.646301ms) STEP: Waiting for expected CSI calls I1106 01:52:50.753377 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:52:50.756314 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054","storage.kubernetes.io/csiProvisionerIdentity":"1636163563975-8081-csi-mock-csi-mock-volumes-1974"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1106 01:52:51.359196 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:52:51.361081 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054","storage.kubernetes.io/csiProvisionerIdentity":"1636163563975-8081-csi-mock-csi-mock-volumes-1974"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1106 
01:52:52.377353 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:52:52.379209 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054","storage.kubernetes.io/csiProvisionerIdentity":"1636163563975-8081-csi-mock-csi-mock-volumes-1974"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1106 01:52:54.417794 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:52:54.419: INFO: >>> kubeConfig: /root/.kube/config I1106 01:52:54.510813 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054","storage.kubernetes.io/csiProvisionerIdentity":"1636163563975-8081-csi-mock-csi-mock-volumes-1974"}},"Response":{},"Error":"","FullError":null} I1106 01:52:54.515148 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:52:54.517: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:54.603: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:52:54.718: INFO: >>> kubeConfig: /root/.kube/config I1106 01:52:54.801221 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054/globalmount","target_path":"/var/lib/kubelet/pods/680094a3-787a-4e9f-9418-3890c67633b7/volumes/kubernetes.io~csi/pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054","storage.kubernetes.io/csiProvisionerIdentity":"1636163563975-8081-csi-mock-csi-mock-volumes-1974"}},"Response":{},"Error":"","FullError":null} STEP: Waiting for pod to be running STEP: Deleting the previously created pod Nov 6 01:52:59.353: INFO: Deleting pod "pvc-volume-tester-lkjs2" in namespace "csi-mock-volumes-1974" Nov 6 01:52:59.358: INFO: Wait up to 5m0s for pod "pvc-volume-tester-lkjs2" to be fully deleted Nov 6 01:53:02.126: INFO: >>> kubeConfig: /root/.kube/config I1106 01:53:02.216252 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/680094a3-787a-4e9f-9418-3890c67633b7/volumes/kubernetes.io~csi/pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054/mount"},"Response":{},"Error":"","FullError":null} I1106 
01:53:02.225842 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:53:02.227504 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-lkjs2 Nov 6 01:53:10.365: INFO: Deleting pod "pvc-volume-tester-lkjs2" in namespace "csi-mock-volumes-1974" STEP: Deleting claim pvc-htnr9 Nov 6 01:53:10.377: INFO: Waiting up to 2m0s for PersistentVolume pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054 to get deleted Nov 6 01:53:10.379: INFO: PersistentVolume pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054 found and phase=Bound (2.362381ms) I1106 01:53:10.392717 31 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 6 01:53:12.383: INFO: PersistentVolume pvc-68e534fc-2c1e-424f-b2a1-f95b6fd60054 was removed STEP: Deleting storageclass csi-mock-volumes-1974-sch6c7b STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-1974 STEP: Waiting for namespaces [csi-mock-volumes-1974] to vanish STEP: uninstalling csi mock driver Nov 6 01:53:18.411: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-attacher Nov 6 01:53:18.414: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-1974 Nov 6 01:53:18.418: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-1974 Nov 6 01:53:18.421: INFO: deleting *v1.Role: csi-mock-volumes-1974-2039/external-attacher-cfg-csi-mock-volumes-1974 Nov 6 01:53:18.425: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1974-2039/csi-attacher-role-cfg Nov 6 01:53:18.428: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-provisioner Nov 6 01:53:18.432: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-1974 Nov 6 01:53:18.435: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-1974 Nov 6 01:53:18.438: INFO: deleting *v1.Role: csi-mock-volumes-1974-2039/external-provisioner-cfg-csi-mock-volumes-1974 Nov 6 01:53:18.442: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1974-2039/csi-provisioner-role-cfg Nov 6 01:53:18.446: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-resizer Nov 6 01:53:18.449: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-1974 Nov 6 01:53:18.453: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-1974 Nov 6 01:53:18.457: INFO: deleting *v1.Role: csi-mock-volumes-1974-2039/external-resizer-cfg-csi-mock-volumes-1974 Nov 6 01:53:18.460: INFO: deleting *v1.RoleBinding: csi-mock-volumes-1974-2039/csi-resizer-role-cfg Nov 6 01:53:18.463: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-snapshotter Nov 6 01:53:18.467: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-1974 Nov 6 01:53:18.470: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-1974 Nov 6 01:53:18.473: INFO: deleting *v1.Role: csi-mock-volumes-1974-2039/external-snapshotter-leaderelection-csi-mock-volumes-1974 Nov 6 01:53:18.477: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-1974-2039/external-snapshotter-leaderelection Nov 6 01:53:18.480: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-1974-2039/csi-mock Nov 6 01:53:18.484: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-1974 Nov 6 01:53:18.487: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-1974 Nov 6 01:53:18.491: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-1974 Nov 6 01:53:18.494: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-1974 Nov 6 01:53:18.497: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-1974 Nov 6 01:53:18.500: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-1974 Nov 6 01:53:18.504: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-1974 Nov 6 01:53:18.507: INFO: deleting *v1.StatefulSet: csi-mock-volumes-1974-2039/csi-mockplugin Nov 6 01:53:18.511: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-1974 STEP: deleting the driver namespace: csi-mock-volumes-1974-2039 STEP: Waiting for namespaces [csi-mock-volumes-1974-2039] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:02.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:83.869 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error","total":-1,"completed":5,"skipped":171,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:02.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 6 01:54:06.613: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7971 PodName:hostexec-node1-7dbbj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:06.613: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:06.710: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 6 01:54:06.710: INFO: exec 
node1: stdout: "0\n" Nov 6 01:54:06.710: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 6 01:54:06.710: INFO: exec node1: exit code: 0 Nov 6 01:54:06.710: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:06.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7971" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.155 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:52:54.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-5561 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 6 01:52:55.032: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-attacher Nov 6 01:52:55.035: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5561 Nov 6 01:52:55.035: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5561 Nov 6 01:52:55.038: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5561 Nov 6 01:52:55.041: INFO: creating *v1.Role: csi-mock-volumes-5561-9308/external-attacher-cfg-csi-mock-volumes-5561 Nov 6 01:52:55.044: INFO: creating *v1.RoleBinding: csi-mock-volumes-5561-9308/csi-attacher-role-cfg Nov 6 01:52:55.046: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-provisioner Nov 6 01:52:55.049: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5561 Nov 6 01:52:55.049: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5561 Nov 6 01:52:55.052: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5561 Nov 6 
01:52:55.055: INFO: creating *v1.Role: csi-mock-volumes-5561-9308/external-provisioner-cfg-csi-mock-volumes-5561 Nov 6 01:52:55.058: INFO: creating *v1.RoleBinding: csi-mock-volumes-5561-9308/csi-provisioner-role-cfg Nov 6 01:52:55.060: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-resizer Nov 6 01:52:55.063: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5561 Nov 6 01:52:55.063: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5561 Nov 6 01:52:55.066: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5561 Nov 6 01:52:55.069: INFO: creating *v1.Role: csi-mock-volumes-5561-9308/external-resizer-cfg-csi-mock-volumes-5561 Nov 6 01:52:55.072: INFO: creating *v1.RoleBinding: csi-mock-volumes-5561-9308/csi-resizer-role-cfg Nov 6 01:52:55.074: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-snapshotter Nov 6 01:52:55.076: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5561 Nov 6 01:52:55.077: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5561 Nov 6 01:52:55.079: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5561 Nov 6 01:52:55.082: INFO: creating *v1.Role: csi-mock-volumes-5561-9308/external-snapshotter-leaderelection-csi-mock-volumes-5561 Nov 6 01:52:55.086: INFO: creating *v1.RoleBinding: csi-mock-volumes-5561-9308/external-snapshotter-leaderelection Nov 6 01:52:55.089: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-mock Nov 6 01:52:55.092: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5561 Nov 6 01:52:55.094: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5561 Nov 6 01:52:55.097: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5561 Nov 6 01:52:55.100: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5561 Nov 6 01:52:55.103: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5561 Nov 6 01:52:55.106: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5561 Nov 6 01:52:55.108: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5561 Nov 6 01:52:55.111: INFO: creating *v1.StatefulSet: csi-mock-volumes-5561-9308/csi-mockplugin Nov 6 01:52:55.115: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5561 Nov 6 01:52:55.118: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5561" Nov 6 01:52:55.120: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5561 to register on node node2 I1106 01:53:01.225836 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5561","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:53:01.264622 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1106 01:53:01.305926 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5561","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:53:01.307634 32 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1106 01:53:01.320023 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1106 01:53:02.131521 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5561"},"Error":"","FullError":null} STEP: Creating pod Nov 6 01:53:04.636: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:53:04.640: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-w2g69] to have phase Bound Nov 6 01:53:04.643: INFO: PersistentVolumeClaim pvc-w2g69 found but phase is Pending instead of Bound. I1106 01:53:04.652120 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8"}}},"Error":"","FullError":null} Nov 6 01:53:06.647: INFO: PersistentVolumeClaim pvc-w2g69 found and phase=Bound (2.007010697s) Nov 6 01:53:06.663: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-w2g69] to have phase Bound Nov 6 01:53:06.676: INFO: PersistentVolumeClaim pvc-w2g69 found and phase=Bound (12.863614ms) STEP: Waiting for expected CSI calls I1106 01:53:07.499120 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:53:07.504558 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8","storage.kubernetes.io/csiProvisionerIdentity":"1636163581324-8081-csi-mock-csi-mock-volumes-5561"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} STEP: Deleting the previously created pod Nov 6 01:53:07.677: INFO: Deleting pod "pvc-volume-tester-5jjfh" in namespace "csi-mock-volumes-5561" Nov 6 01:53:07.681: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5jjfh" to be fully deleted I1106 01:53:08.102103 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:53:08.106724 32 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8","storage.kubernetes.io/csiProvisionerIdentity":"1636163581324-8081-csi-mock-csi-mock-volumes-5561"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1106 01:53:09.210164 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:53:09.212874 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8","storage.kubernetes.io/csiProvisionerIdentity":"1636163581324-8081-csi-mock-csi-mock-volumes-5561"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1106 01:53:11.242392 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:53:11.244944 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8","storage.kubernetes.io/csiProvisionerIdentity":"1636163581324-8081-csi-mock-csi-mock-volumes-5561"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1106 01:53:15.267665 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:53:15.270411 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8","storage.kubernetes.io/csiProvisionerIdentity":"1636163581324-8081-csi-mock-csi-mock-volumes-5561"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} STEP: Waiting for all remaining expected CSI calls I1106 01:53:20.071710 32 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:53:20.075953 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Deleting pod pvc-volume-tester-5jjfh Nov 6 01:53:20.688: INFO: Deleting pod "pvc-volume-tester-5jjfh" in namespace "csi-mock-volumes-5561" STEP: Deleting claim pvc-w2g69 Nov 6 01:53:20.697: INFO: Waiting up to 2m0s for PersistentVolume pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8 to get deleted Nov 6 01:53:20.700: INFO: PersistentVolume pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8 found and phase=Bound (2.205777ms) I1106 01:53:20.710916 32 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 6 01:53:22.703: INFO: PersistentVolume pvc-598ece76-fce7-4c07-8217-ebb9c41a75d8 was removed STEP: Deleting storageclass csi-mock-volumes-5561-sct57fn STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5561 STEP: Waiting for namespaces [csi-mock-volumes-5561] to vanish STEP: uninstalling csi mock driver Nov 6 01:53:28.730: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-attacher Nov 6 01:53:28.734: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5561 Nov 6 01:53:28.738: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5561 Nov 6 01:53:28.742: INFO: deleting *v1.Role: csi-mock-volumes-5561-9308/external-attacher-cfg-csi-mock-volumes-5561 Nov 6 01:53:28.746: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5561-9308/csi-attacher-role-cfg Nov 6 01:53:28.749: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-provisioner Nov 6 01:53:28.753: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5561 Nov 6 01:53:28.756: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5561 Nov 6 01:53:28.761: INFO: deleting *v1.Role: csi-mock-volumes-5561-9308/external-provisioner-cfg-csi-mock-volumes-5561 Nov 6 01:53:28.765: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5561-9308/csi-provisioner-role-cfg Nov 6 01:53:28.768: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-resizer Nov 6 01:53:28.771: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5561 Nov 6 01:53:28.775: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5561 Nov 6 01:53:28.778: INFO: deleting *v1.Role: csi-mock-volumes-5561-9308/external-resizer-cfg-csi-mock-volumes-5561 Nov 6 01:53:28.782: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5561-9308/csi-resizer-role-cfg Nov 6 01:53:28.785: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-snapshotter Nov 6 01:53:28.788: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5561 Nov 6 01:53:28.791: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5561 Nov 6 01:53:28.795: INFO: deleting *v1.Role: csi-mock-volumes-5561-9308/external-snapshotter-leaderelection-csi-mock-volumes-5561 Nov 6 01:53:28.797: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5561-9308/external-snapshotter-leaderelection Nov 6 01:53:28.801: INFO: deleting 
*v1.ServiceAccount: csi-mock-volumes-5561-9308/csi-mock Nov 6 01:53:28.804: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5561 Nov 6 01:53:28.807: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5561 Nov 6 01:53:28.810: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5561 Nov 6 01:53:28.813: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5561 Nov 6 01:53:28.816: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5561 Nov 6 01:53:28.819: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5561 Nov 6 01:53:28.822: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5561 Nov 6 01:53:28.825: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5561-9308/csi-mockplugin Nov 6 01:53:28.829: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5561 STEP: deleting the driver namespace: csi-mock-volumes-5561-9308 STEP: Waiting for namespaces [csi-mock-volumes-5561-9308] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:12.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:77.893 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should call NodeUnstage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error","total":-1,"completed":3,"skipped":187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:53:38.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Nov 6 01:54:08.211: INFO: Deleting pod "pv-8688"/"pod-ephm-test-projected-rch5" Nov 6 01:54:08.211: INFO: Deleting pod "pod-ephm-test-projected-rch5" in namespace "pv-8688" Nov 6 01:54:08.216: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-rch5" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:20.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8688" for this suite. 
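The Ephemeralstorage spec that just finished only needs a pod whose volume refers to a Secret that was never created: the volume can never be mounted, the pod never becomes ready, and the test asserts that deleting the pod still succeeds. A rough manual reproduction, with all names hypothetical (the e2e test constructs the pod programmatically rather than from a manifest):

# Pod that mounts a Secret which does not exist (names are hypothetical).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-ephm-test-example
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/secret
  volumes:
  - name: secret-vol
    secret:
      secretName: does-not-exist    # missing Secret keeps the pod in ContainerCreating
EOF

# The pod stays stuck waiting on the volume, but deletion must still complete,
# which is the behaviour the spec verifies.
kubectl delete pod pod-ephm-test-example --wait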
• [SLOW TEST:42.054 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":6,"skipped":142,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:06.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c6f2dc6a-0d5c-4d81-8978-0f3159bee6e3" Nov 6 01:54:10.800: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c6f2dc6a-0d5c-4d81-8978-0f3159bee6e3" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c6f2dc6a-0d5c-4d81-8978-0f3159bee6e3" "/tmp/local-volume-test-c6f2dc6a-0d5c-4d81-8978-0f3159bee6e3"] Namespace:persistent-local-volumes-test-5617 PodName:hostexec-node1-t9rql ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:10.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:54:10.976: INFO: Creating a PV followed by a PVC Nov 6 01:54:10.982: INFO: Waiting for PV local-pv94plb to bind to PVC pvc-ndxz5 Nov 6 01:54:10.982: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-ndxz5] to have phase Bound Nov 6 01:54:10.984: INFO: PersistentVolumeClaim pvc-ndxz5 found but phase is Pending instead of Bound. Nov 6 01:54:12.987: INFO: PersistentVolumeClaim pvc-ndxz5 found but phase is Pending instead of Bound. Nov 6 01:54:14.991: INFO: PersistentVolumeClaim pvc-ndxz5 found but phase is Pending instead of Bound. Nov 6 01:54:16.995: INFO: PersistentVolumeClaim pvc-ndxz5 found but phase is Pending instead of Bound. Nov 6 01:54:18.999: INFO: PersistentVolumeClaim pvc-ndxz5 found but phase is Pending instead of Bound. Nov 6 01:54:21.007: INFO: PersistentVolumeClaim pvc-ndxz5 found but phase is Pending instead of Bound. 
Nov 6 01:54:23.010: INFO: PersistentVolumeClaim pvc-ndxz5 found and phase=Bound (12.028189104s) Nov 6 01:54:23.010: INFO: Waiting up to 3m0s for PersistentVolume local-pv94plb to have phase Bound Nov 6 01:54:23.013: INFO: PersistentVolume local-pv94plb found and phase=Bound (2.451397ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 6 01:54:27.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-5617 exec pod-a40eb5aa-67ce-4662-bae5-4d89985f2011 --namespace=persistent-local-volumes-test-5617 -- stat -c %g /mnt/volume1' Nov 6 01:54:27.309: INFO: stderr: "" Nov 6 01:54:27.309: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-a40eb5aa-67ce-4662-bae5-4d89985f2011 in namespace persistent-local-volumes-test-5617 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:54:27.314: INFO: Deleting PersistentVolumeClaim "pvc-ndxz5" Nov 6 01:54:27.318: INFO: Deleting PersistentVolume "local-pv94plb" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c6f2dc6a-0d5c-4d81-8978-0f3159bee6e3" Nov 6 01:54:27.323: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c6f2dc6a-0d5c-4d81-8978-0f3159bee6e3"] Namespace:persistent-local-volumes-test-5617 PodName:hostexec-node1-t9rql ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:27.431: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c6f2dc6a-0d5c-4d81-8978-0f3159bee6e3] Namespace:persistent-local-volumes-test-5617 PodName:hostexec-node1-t9rql ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.431: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:27.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5617" for this suite. 
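For the tmpfs volume type above, the fsGroup check boils down to a group-ownership probe: the pod is created with securityContext.fsGroup (1234 in this run), kubelet applies that GID to the mounted volume, and the test execs stat on the mount point to confirm it. A sketch under the assumption of direct shell access to the node, with illustrative paths and names (the test wraps the mount commands in nsenter via the hostexec pod):

# Back the local PV with a small tmpfs mount (path is illustrative).
VOL_DIR=/tmp/local-volume-test-example
mkdir -p "${VOL_DIR}"
mount -t tmpfs -o size=10m tmpfs "${VOL_DIR}"

# Once a pod bound to the resulting PVC is running with
# securityContext.fsGroup: 1234, the same probe as in the log reports the GID
# of the mounted volume (pod and namespace names are hypothetical).
kubectl exec pod-fsgroup-example -n persistent-local-volumes-test-example \
  -- stat -c %g /mnt/volume1    # expected output: 1234

# Teardown mirrors the AfterEach block above.
umount "${VOL_DIR}"
rm -r "${VOL_DIR}"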
• [SLOW TEST:20.785 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":6,"skipped":201,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:01.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 6 01:54:03.382: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d5206024-b9b2-42b4-9939-add38a233d7e] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:03.382: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:03.698: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-db9a2a05-e368-45ae-9fdf-553125685d9b] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:03.698: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:03.885: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f06853e6-de47-4151-8112-0965df92a9bc] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:03.885: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:04.033: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8eb75af1-f8bf-48a3-8fac-070855237b1f] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:04.033: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:04.149: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-0973f74e-f8cd-49d2-9edd-c6acc932c342] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:04.149: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:04.236: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8de0688a-ea73-4aef-81e7-f8c7cc3b9f2a] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:04.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:54:04.355: INFO: Creating a PV followed by a PVC Nov 6 01:54:04.363: INFO: Creating a PV followed by a PVC Nov 6 01:54:04.369: INFO: Creating a PV followed by a PVC Nov 6 01:54:04.375: INFO: Creating a PV followed by a PVC Nov 6 01:54:04.383: INFO: Creating a PV followed by a PVC Nov 6 01:54:04.389: INFO: Creating a PV followed by a PVC Nov 6 01:54:14.434: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 6 01:54:16.450: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c84f75cd-5e67-4f24-891a-68fda0b4606b] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:16.450: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:16.535: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1cbb539a-e2b1-4ea4-a3b1-d5d401c76f85] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:16.535: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:16.630: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-887a18b9-9750-4e84-912a-bdc646c9dcba] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:16.630: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:16.731: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-dd2eebd9-0d56-47b9-94b2-f3edd6a9c94a] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:16.731: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:16.857: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4c01d81c-9ca1-4ae2-aa01-07a60eabeef4] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:16.857: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:16.983: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a16a7721-1572-4738-8fb5-4d6f5d6dc179] 
Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:16.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:54:17.095: INFO: Creating a PV followed by a PVC Nov 6 01:54:17.103: INFO: Creating a PV followed by a PVC Nov 6 01:54:17.109: INFO: Creating a PV followed by a PVC Nov 6 01:54:17.115: INFO: Creating a PV followed by a PVC Nov 6 01:54:17.120: INFO: Creating a PV followed by a PVC Nov 6 01:54:17.126: INFO: Creating a PV followed by a PVC Nov 6 01:54:27.171: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod management is parallel and pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Nov 6 01:54:27.171: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 6 01:54:27.172: INFO: Deleting PersistentVolumeClaim "pvc-ggbw8" Nov 6 01:54:27.176: INFO: Deleting PersistentVolume "local-pvt5vd2" STEP: Cleaning up PVC and PV Nov 6 01:54:27.180: INFO: Deleting PersistentVolumeClaim "pvc-2wgqf" Nov 6 01:54:27.184: INFO: Deleting PersistentVolume "local-pvdvqsw" STEP: Cleaning up PVC and PV Nov 6 01:54:27.188: INFO: Deleting PersistentVolumeClaim "pvc-ddwlp" Nov 6 01:54:27.192: INFO: Deleting PersistentVolume "local-pv4dhqk" STEP: Cleaning up PVC and PV Nov 6 01:54:27.195: INFO: Deleting PersistentVolumeClaim "pvc-lt94s" Nov 6 01:54:27.198: INFO: Deleting PersistentVolume "local-pv97cm9" STEP: Cleaning up PVC and PV Nov 6 01:54:27.202: INFO: Deleting PersistentVolumeClaim "pvc-lfgch" Nov 6 01:54:27.206: INFO: Deleting PersistentVolume "local-pvv6wbm" STEP: Cleaning up PVC and PV Nov 6 01:54:27.212: INFO: Deleting PersistentVolumeClaim "pvc-zt5dk" Nov 6 01:54:27.215: INFO: Deleting PersistentVolume "local-pvpfrqb" STEP: Removing the test directory Nov 6 01:54:27.219: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d5206024-b9b2-42b4-9939-add38a233d7e] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:27.316: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-db9a2a05-e368-45ae-9fdf-553125685d9b] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:27.410: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f06853e6-de47-4151-8112-0965df92a9bc] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 
01:54:27.511: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8eb75af1-f8bf-48a3-8fac-070855237b1f] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:27.629: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0973f74e-f8cd-49d2-9edd-c6acc932c342] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:27.716: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8de0688a-ea73-4aef-81e7-f8c7cc3b9f2a] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node1-hcbrt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 6 01:54:27.807: INFO: Deleting PersistentVolumeClaim "pvc-kvwmn" Nov 6 01:54:27.811: INFO: Deleting PersistentVolume "local-pvhqrm9" STEP: Cleaning up PVC and PV Nov 6 01:54:27.815: INFO: Deleting PersistentVolumeClaim "pvc-ktl4s" Nov 6 01:54:27.819: INFO: Deleting PersistentVolume "local-pvckpm2" STEP: Cleaning up PVC and PV Nov 6 01:54:27.822: INFO: Deleting PersistentVolumeClaim "pvc-6nml6" Nov 6 01:54:27.825: INFO: Deleting PersistentVolume "local-pvvpc2m" STEP: Cleaning up PVC and PV Nov 6 01:54:27.829: INFO: Deleting PersistentVolumeClaim "pvc-hsshr" Nov 6 01:54:27.833: INFO: Deleting PersistentVolume "local-pvtvrb6" STEP: Cleaning up PVC and PV Nov 6 01:54:27.837: INFO: Deleting PersistentVolumeClaim "pvc-7d2kj" Nov 6 01:54:27.840: INFO: Deleting PersistentVolume "local-pv26lbc" STEP: Cleaning up PVC and PV Nov 6 01:54:27.843: INFO: Deleting PersistentVolumeClaim "pvc-j4wpf" Nov 6 01:54:27.846: INFO: Deleting PersistentVolume "local-pvgkl9k" STEP: Removing the test directory Nov 6 01:54:27.850: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c84f75cd-5e67-4f24-891a-68fda0b4606b] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:27.933: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1cbb539a-e2b1-4ea4-a3b1-d5d401c76f85] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:27.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:28.070: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-887a18b9-9750-4e84-912a-bdc646c9dcba] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Nov 6 01:54:28.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:28.166: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dd2eebd9-0d56-47b9-94b2-f3edd6a9c94a] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:28.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:28.261: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4c01d81c-9ca1-4ae2-aa01-07a60eabeef4] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:28.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:54:28.364: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a16a7721-1572-4738-8fb5-4d6f5d6dc179] Namespace:persistent-local-volumes-test-3867 PodName:hostexec-node2-pq789 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:28.364: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:28.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3867" for this suite. S [SKIPPING] [27.127 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod management is parallel and pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:425 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:53:15.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-4561 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:53:15.395: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-attacher Nov 6 01:53:15.398: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4561 Nov 6 01:53:15.398: INFO: Define 
cluster role external-attacher-runner-csi-mock-volumes-4561 Nov 6 01:53:15.401: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4561 Nov 6 01:53:15.403: INFO: creating *v1.Role: csi-mock-volumes-4561-6160/external-attacher-cfg-csi-mock-volumes-4561 Nov 6 01:53:15.406: INFO: creating *v1.RoleBinding: csi-mock-volumes-4561-6160/csi-attacher-role-cfg Nov 6 01:53:15.409: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-provisioner Nov 6 01:53:15.412: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4561 Nov 6 01:53:15.412: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4561 Nov 6 01:53:15.414: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4561 Nov 6 01:53:15.417: INFO: creating *v1.Role: csi-mock-volumes-4561-6160/external-provisioner-cfg-csi-mock-volumes-4561 Nov 6 01:53:15.420: INFO: creating *v1.RoleBinding: csi-mock-volumes-4561-6160/csi-provisioner-role-cfg Nov 6 01:53:15.423: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-resizer Nov 6 01:53:15.425: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4561 Nov 6 01:53:15.425: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4561 Nov 6 01:53:15.428: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4561 Nov 6 01:53:15.433: INFO: creating *v1.Role: csi-mock-volumes-4561-6160/external-resizer-cfg-csi-mock-volumes-4561 Nov 6 01:53:15.435: INFO: creating *v1.RoleBinding: csi-mock-volumes-4561-6160/csi-resizer-role-cfg Nov 6 01:53:15.437: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-snapshotter Nov 6 01:53:15.440: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4561 Nov 6 01:53:15.440: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4561 Nov 6 01:53:15.443: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4561 Nov 6 01:53:15.445: INFO: creating *v1.Role: csi-mock-volumes-4561-6160/external-snapshotter-leaderelection-csi-mock-volumes-4561 Nov 6 01:53:15.448: INFO: creating *v1.RoleBinding: csi-mock-volumes-4561-6160/external-snapshotter-leaderelection Nov 6 01:53:15.450: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-mock Nov 6 01:53:15.453: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4561 Nov 6 01:53:15.455: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4561 Nov 6 01:53:15.458: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4561 Nov 6 01:53:15.460: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4561 Nov 6 01:53:15.463: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4561 Nov 6 01:53:15.465: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4561 Nov 6 01:53:15.468: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4561 Nov 6 01:53:15.471: INFO: creating *v1.StatefulSet: csi-mock-volumes-4561-6160/csi-mockplugin Nov 6 01:53:15.475: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4561 Nov 6 01:53:15.478: INFO: creating *v1.StatefulSet: csi-mock-volumes-4561-6160/csi-mockplugin-attacher Nov 6 01:53:15.481: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4561" Nov 6 01:53:15.484: INFO: waiting for CSIDriver 
csi-mock-csi-mock-volumes-4561 to register on node node2 STEP: Creating pod Nov 6 01:53:25.505: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 6 01:53:39.531: INFO: Deleting pod "pvc-volume-tester-dqx4z" in namespace "csi-mock-volumes-4561" Nov 6 01:53:39.538: INFO: Wait up to 5m0s for pod "pvc-volume-tester-dqx4z" to be fully deleted STEP: Deleting pod pvc-volume-tester-dqx4z Nov 6 01:53:45.546: INFO: Deleting pod "pvc-volume-tester-dqx4z" in namespace "csi-mock-volumes-4561" STEP: Deleting claim pvc-x2s2f Nov 6 01:53:45.557: INFO: Waiting up to 2m0s for PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 to get deleted Nov 6 01:53:45.559: INFO: PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 found and phase=Bound (2.184547ms) Nov 6 01:53:47.562: INFO: PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 found and phase=Released (2.005054122s) Nov 6 01:53:49.566: INFO: PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 found and phase=Released (4.008925136s) Nov 6 01:53:51.570: INFO: PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 found and phase=Released (6.013103481s) Nov 6 01:53:53.574: INFO: PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 found and phase=Released (8.017362813s) Nov 6 01:53:55.578: INFO: PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 found and phase=Released (10.021262307s) Nov 6 01:53:57.582: INFO: PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 found and phase=Released (12.025841375s) Nov 6 01:53:59.588: INFO: PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 found and phase=Released (14.03117097s) Nov 6 01:54:01.591: INFO: PersistentVolume pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4 was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-4561 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4561 STEP: Waiting for namespaces [csi-mock-volumes-4561] to vanish STEP: uninstalling csi mock driver Nov 6 01:54:07.603: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-attacher Nov 6 01:54:07.608: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4561 Nov 6 01:54:07.612: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4561 Nov 6 01:54:07.616: INFO: deleting *v1.Role: csi-mock-volumes-4561-6160/external-attacher-cfg-csi-mock-volumes-4561 Nov 6 01:54:07.620: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4561-6160/csi-attacher-role-cfg Nov 6 01:54:07.623: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-provisioner Nov 6 01:54:07.627: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4561 Nov 6 01:54:07.630: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4561 Nov 6 01:54:07.633: INFO: deleting *v1.Role: csi-mock-volumes-4561-6160/external-provisioner-cfg-csi-mock-volumes-4561 Nov 6 01:54:07.637: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4561-6160/csi-provisioner-role-cfg Nov 6 01:54:07.641: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-resizer Nov 6 01:54:07.645: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4561 Nov 6 01:54:07.648: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4561 Nov 6 01:54:07.651: INFO: deleting *v1.Role: csi-mock-volumes-4561-6160/external-resizer-cfg-csi-mock-volumes-4561 Nov 6 01:54:07.655: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-4561-6160/csi-resizer-role-cfg Nov 6 01:54:07.658: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-snapshotter Nov 6 01:54:07.661: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4561 Nov 6 01:54:07.665: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4561 Nov 6 01:54:07.668: INFO: deleting *v1.Role: csi-mock-volumes-4561-6160/external-snapshotter-leaderelection-csi-mock-volumes-4561 Nov 6 01:54:07.673: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4561-6160/external-snapshotter-leaderelection Nov 6 01:54:07.677: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4561-6160/csi-mock Nov 6 01:54:07.681: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4561 Nov 6 01:54:07.684: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4561 Nov 6 01:54:07.687: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4561 Nov 6 01:54:07.691: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4561 Nov 6 01:54:07.694: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4561 Nov 6 01:54:07.697: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4561 Nov 6 01:54:07.700: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4561 Nov 6 01:54:07.704: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4561-6160/csi-mockplugin Nov 6 01:54:07.707: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4561 Nov 6 01:54:07.710: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4561-6160/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4561-6160 STEP: Waiting for namespaces [csi-mock-volumes-4561-6160] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:35.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:80.392 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity unused /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":5,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:36.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Nov 6 01:54:36.104: 
INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:36.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3024" for this suite. [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Nov 6 01:54:36.115: INFO: AfterEach: Cleaning up test resources Nov 6 01:54:36.115: INFO: pvc is nil Nov 6 01:54:36.115: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:36.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:39 Nov 6 01:54:36.196: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:36.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-445" for this suite. 
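Note: the CSI mock volume spec above waits for the bound PersistentVolume to disappear after the claim is deleted, logging its phase (Bound, then Released) every two seconds until it is removed. Outside the Go framework the same wait can be approximated with kubectl; a sketch using the PV name from that run:

PV=pvc-ddbcdfd5-b588-4c6e-ad7e-7b3b829928d4   # substitute the PV you are waiting on
while phase=$(kubectl get pv "$PV" -o jsonpath='{.status.phase}' 2>/dev/null); do
  echo "PersistentVolume $PV found and phase=$phase"
  sleep 2
done
echo "PersistentVolume $PV was removed"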
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Multi-AZ Cluster Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should schedule pods in the same zones as statically provisioned PVs [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:50 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:40 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:27.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:54:31.605: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-990247c9-8d11-4ca5-b31a-93fc0d9c1b8c-backend && mount --bind /tmp/local-volume-test-990247c9-8d11-4ca5-b31a-93fc0d9c1b8c-backend /tmp/local-volume-test-990247c9-8d11-4ca5-b31a-93fc0d9c1b8c-backend && ln -s /tmp/local-volume-test-990247c9-8d11-4ca5-b31a-93fc0d9c1b8c-backend /tmp/local-volume-test-990247c9-8d11-4ca5-b31a-93fc0d9c1b8c] Namespace:persistent-local-volumes-test-2762 PodName:hostexec-node2-fspvx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:31.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:54:31.700: INFO: Creating a PV followed by a PVC Nov 6 01:54:31.707: INFO: Waiting for PV local-pv56zbq to bind to PVC pvc-p88bj Nov 6 01:54:31.707: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-p88bj] to have phase Bound Nov 6 01:54:31.709: INFO: PersistentVolumeClaim pvc-p88bj found but phase is Pending instead of Bound. Nov 6 01:54:33.712: INFO: PersistentVolumeClaim pvc-p88bj found but phase is Pending instead of Bound. Nov 6 01:54:35.717: INFO: PersistentVolumeClaim pvc-p88bj found but phase is Pending instead of Bound. 
Nov 6 01:54:37.721: INFO: PersistentVolumeClaim pvc-p88bj found and phase=Bound (6.013384381s) Nov 6 01:54:37.721: INFO: Waiting up to 3m0s for PersistentVolume local-pv56zbq to have phase Bound Nov 6 01:54:37.722: INFO: PersistentVolume local-pv56zbq found and phase=Bound (1.749754ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 6 01:54:37.727: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:54:37.728: INFO: Deleting PersistentVolumeClaim "pvc-p88bj" Nov 6 01:54:37.733: INFO: Deleting PersistentVolume "local-pv56zbq" STEP: Removing the test directory Nov 6 01:54:37.737: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-990247c9-8d11-4ca5-b31a-93fc0d9c1b8c && umount /tmp/local-volume-test-990247c9-8d11-4ca5-b31a-93fc0d9c1b8c-backend && rm -r /tmp/local-volume-test-990247c9-8d11-4ca5-b31a-93fc0d9c1b8c-backend] Namespace:persistent-local-volumes-test-2762 PodName:hostexec-node2-fspvx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:37.737: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:37.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2762" for this suite. 
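Note: the dir-link-bindmounted volume type above is three commands on the node (run through the hostexec pod with nsenter), and its teardown is their inverse; the local PV is then created over the symlink path. A sketch with a hypothetical path:

DIR=/tmp/local-volume-test-example               # hypothetical path
mkdir "${DIR}-backend"
mount --bind "${DIR}-backend" "${DIR}-backend"   # self bind-mount of the backing directory
ln -s "${DIR}-backend" "$DIR"                    # the symlink the PV points at

# teardown, as in the AfterEach block
rm "$DIR"
umount "${DIR}-backend"
rm -r "${DIR}-backend"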
S [SKIPPING] [10.337 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:37.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 6 01:54:37.971: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:37.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-4822" for this suite. 
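Note: each "Creating a PV followed by a PVC" line in the local-volume specs above stands for a local PersistentVolume pinned to one node plus a claim that binds to it. A minimal, hypothetical pair of the same shape (name, size and storageClassName are illustrative, not the generated ones; the node name matches the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /tmp/local-volume-test-example
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 2Gi
EOF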
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:81 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:36.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106 STEP: Creating a pod to test downward API volume plugin Nov 6 01:54:36.261: INFO: Waiting up to 5m0s for pod "metadata-volume-de175108-2dec-411e-9733-c54bb35ebb1f" in namespace "projected-354" to be "Succeeded or Failed" Nov 6 01:54:36.268: INFO: Pod "metadata-volume-de175108-2dec-411e-9733-c54bb35ebb1f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.386427ms Nov 6 01:54:38.271: INFO: Pod "metadata-volume-de175108-2dec-411e-9733-c54bb35ebb1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010576569s Nov 6 01:54:40.276: INFO: Pod "metadata-volume-de175108-2dec-411e-9733-c54bb35ebb1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015121802s STEP: Saw pod success Nov 6 01:54:40.276: INFO: Pod "metadata-volume-de175108-2dec-411e-9733-c54bb35ebb1f" satisfied condition "Succeeded or Failed" Nov 6 01:54:40.278: INFO: Trying to get logs from node node2 pod metadata-volume-de175108-2dec-411e-9733-c54bb35ebb1f container client-container: STEP: delete the pod Nov 6 01:54:40.297: INFO: Waiting for pod metadata-volume-de175108-2dec-411e-9733-c54bb35ebb1f to disappear Nov 6 01:54:40.299: INFO: Pod metadata-volume-de175108-2dec-411e-9733-c54bb35ebb1f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:40.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-354" for this suite. 
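Note: the Projected downwardAPI spec that just passed exercises roughly the following pod shape: a projected volume exposing metadata.name, mounted with a non-default file mode, read by a non-root container while the pod-level fsGroup is set. A hedged sketch; image, IDs, mode and names are illustrative, not the generated ones used by the test:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-fsgroup-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
    fsGroup: 2000
  containers:
  - name: client-container
    image: busybox:1.35
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0440
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF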
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:40.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Nov 6 01:54:40.370: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Nov 6 01:54:40.376: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:40.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-4763" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:40.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110 STEP: Creating configMap with name projected-configmap-test-volume-map-da1f0564-5ada-4c1d-809a-dafa6aa0e99c STEP: Creating a pod to test consume configMaps Nov 6 01:54:40.469: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d652a94e-2ce2-4046-878a-8b60e1b15190" in namespace "projected-661" to be "Succeeded or Failed" Nov 6 01:54:40.474: INFO: Pod "pod-projected-configmaps-d652a94e-2ce2-4046-878a-8b60e1b15190": Phase="Pending", Reason="", readiness=false. Elapsed: 4.684122ms Nov 6 01:54:42.478: INFO: Pod "pod-projected-configmaps-d652a94e-2ce2-4046-878a-8b60e1b15190": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008987082s Nov 6 01:54:44.481: INFO: Pod "pod-projected-configmaps-d652a94e-2ce2-4046-878a-8b60e1b15190": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011808626s STEP: Saw pod success Nov 6 01:54:44.481: INFO: Pod "pod-projected-configmaps-d652a94e-2ce2-4046-878a-8b60e1b15190" satisfied condition "Succeeded or Failed" Nov 6 01:54:44.483: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-d652a94e-2ce2-4046-878a-8b60e1b15190 container agnhost-container: STEP: delete the pod Nov 6 01:54:44.494: INFO: Waiting for pod pod-projected-configmaps-d652a94e-2ce2-4046-878a-8b60e1b15190 to disappear Nov 6 01:54:44.496: INFO: Pod pod-projected-configmaps-d652a94e-2ce2-4046-878a-8b60e1b15190 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:44.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-661" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:38.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 6 01:54:44.112: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8928 PodName:hostexec-node1-ph4q5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:44.112: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:44.573: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 6 01:54:44.573: INFO: exec node1: stdout: "0\n" Nov 6 01:54:44.573: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 6 01:54:44.573: INFO: exec node1: exit code: 0 Nov 6 01:54:44.573: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:44.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8928" for this suite. 
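Note: two of the skips above are environmental rather than code-related: the PVC Protection spec stopped because the cluster has no default StorageClass, and the gce-localssd-scsi-fs volume type needs a local SSD filesystem mounted under /mnt/disks/by-uuid/google-local-ssds-scsi-fs. The first is addressed by marking an existing class as default via the standard annotation; a sketch (the class name is illustrative):

kubectl get storageclass        # list classes; "(default)" marks the current default, if any
kubectl patch storageclass local-storage \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'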
S [SKIPPING] in Spec Setup (BeforeEach) [6.516 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:44.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:112 [It] should be reschedulable [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Nov 6 01:54:44.601: INFO: Only supported for providers [openstack gce gke vsphere azure] (not local) [AfterEach] pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:322 [AfterEach] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:44.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3333" for this suite. 
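Note: the recurring "Only supported for providers [...] (not local)" skips come from the suite's provider gate: this run was started with the local provider, so specs that need cloud APIs (GCE PD, Regional PD, multi-AZ volumes, reschedulable multi-volume pods) bail out in BeforeEach. Running the same storage focus against a cloud cluster looks roughly like the following; flag names vary by release, so treat this as a sketch and check e2e.test --help for your build:

./e2e.test --kubeconfig="$HOME/.kube/config" \
  --provider=gce \
  --ginkgo.focus='\[sig-storage\]'
# gce/gke runs additionally need provider-specific flags (e.g. project and zone)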
S [SKIPPING] [0.039 seconds] [sig-storage] PersistentVolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Default StorageClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:319 pods that use multiple volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:320 should be reschedulable [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:326 Only supported for providers [openstack gce gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:328 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:44.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 STEP: Creating a pod to test hostPath subPath Nov 6 01:54:44.649: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1869" to be "Succeeded or Failed" Nov 6 01:54:44.653: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091966ms Nov 6 01:54:46.658: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00874661s Nov 6 01:54:48.661: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011780022s Nov 6 01:54:50.665: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016460084s STEP: Saw pod success Nov 6 01:54:50.665: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Nov 6 01:54:50.668: INFO: Trying to get logs from node node2 pod pod-host-path-test container test-container-2: STEP: delete the pod Nov 6 01:54:50.697: INFO: Waiting for pod pod-host-path-test to disappear Nov 6 01:54:50.699: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:50.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1869" for this suite. 
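Note: the HostPath subPath case that just passed exercises containers sharing a hostPath volume, with subPath controlling which part of the volume a mount sees. A minimal single-container sketch of the subPath mechanics (paths, names and image are illustrative, not the ones generated by the test):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-subpath-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.35
    command: ["sh", "-c", "echo hello > /test-volume/file && cat /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
      subPath: sub-dir              # only <hostPath>/sub-dir is visible at /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-subpath-example
      type: DirectoryOrCreate
EOF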
• [SLOW TEST:6.089 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support subPath [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":7,"skipped":298,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:28.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:54:32.597: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b5e8f583-e09c-430e-89ce-1a1a8e990e51-backend && mount --bind /tmp/local-volume-test-b5e8f583-e09c-430e-89ce-1a1a8e990e51-backend /tmp/local-volume-test-b5e8f583-e09c-430e-89ce-1a1a8e990e51-backend && ln -s /tmp/local-volume-test-b5e8f583-e09c-430e-89ce-1a1a8e990e51-backend /tmp/local-volume-test-b5e8f583-e09c-430e-89ce-1a1a8e990e51] Namespace:persistent-local-volumes-test-6332 PodName:hostexec-node1-7p8nd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:32.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:54:32.691: INFO: Creating a PV followed by a PVC Nov 6 01:54:32.698: INFO: Waiting for PV local-pv8zkxb to bind to PVC pvc-n22lj Nov 6 01:54:32.698: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-n22lj] to have phase Bound Nov 6 01:54:32.701: INFO: PersistentVolumeClaim pvc-n22lj found but phase is Pending instead of Bound. Nov 6 01:54:34.704: INFO: PersistentVolumeClaim pvc-n22lj found but phase is Pending instead of Bound. Nov 6 01:54:36.707: INFO: PersistentVolumeClaim pvc-n22lj found but phase is Pending instead of Bound. 
Nov 6 01:54:38.711: INFO: PersistentVolumeClaim pvc-n22lj found and phase=Bound (6.01228838s) Nov 6 01:54:38.711: INFO: Waiting up to 3m0s for PersistentVolume local-pv8zkxb to have phase Bound Nov 6 01:54:38.713: INFO: PersistentVolume local-pv8zkxb found and phase=Bound (2.778562ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:54:42.743: INFO: pod "pod-da7fab1e-bb18-405d-a2c2-94facd9bb30c" created on Node "node1" STEP: Writing in pod1 Nov 6 01:54:42.743: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6332 PodName:pod-da7fab1e-bb18-405d-a2c2-94facd9bb30c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:42.743: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:42.837: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:54:42.837: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6332 PodName:pod-da7fab1e-bb18-405d-a2c2-94facd9bb30c ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:42.837: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:44.547: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-da7fab1e-bb18-405d-a2c2-94facd9bb30c in namespace persistent-local-volumes-test-6332 STEP: Creating pod2 STEP: Creating a pod Nov 6 01:54:50.574: INFO: pod "pod-d7e5b90d-c51f-49d0-8d5b-078cf8f34de9" created on Node "node1" STEP: Reading in pod2 Nov 6 01:54:50.574: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6332 PodName:pod-d7e5b90d-c51f-49d0-8d5b-078cf8f34de9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:50.574: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:50.673: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-d7e5b90d-c51f-49d0-8d5b-078cf8f34de9 in namespace persistent-local-volumes-test-6332 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:54:50.680: INFO: Deleting PersistentVolumeClaim "pvc-n22lj" Nov 6 01:54:50.684: INFO: Deleting PersistentVolume "local-pv8zkxb" STEP: Removing the test directory Nov 6 01:54:50.689: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-b5e8f583-e09c-430e-89ce-1a1a8e990e51 && umount /tmp/local-volume-test-b5e8f583-e09c-430e-89ce-1a1a8e990e51-backend && rm -r /tmp/local-volume-test-b5e8f583-e09c-430e-89ce-1a1a8e990e51-backend] Namespace:persistent-local-volumes-test-6332 PodName:hostexec-node1-7p8nd ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:54:50.689: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:54:50.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6332" for this suite. • [SLOW TEST:22.273 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:44.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-7342 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:54:44.694: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-attacher Nov 6 01:54:44.696: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7342 Nov 6 01:54:44.696: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7342 Nov 6 01:54:44.699: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7342 Nov 6 01:54:44.702: INFO: creating *v1.Role: csi-mock-volumes-7342-7269/external-attacher-cfg-csi-mock-volumes-7342 Nov 6 01:54:44.705: INFO: creating *v1.RoleBinding: csi-mock-volumes-7342-7269/csi-attacher-role-cfg Nov 6 01:54:44.708: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-provisioner Nov 6 01:54:44.711: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7342 Nov 6 01:54:44.711: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7342 Nov 6 01:54:44.714: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7342 Nov 6 01:54:44.717: INFO: creating *v1.Role: csi-mock-volumes-7342-7269/external-provisioner-cfg-csi-mock-volumes-7342 Nov 6 01:54:44.720: INFO: creating *v1.RoleBinding: csi-mock-volumes-7342-7269/csi-provisioner-role-cfg Nov 6 01:54:44.723: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-resizer Nov 6 01:54:44.726: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7342 Nov 6 01:54:44.726: 
INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7342 Nov 6 01:54:44.728: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7342 Nov 6 01:54:44.731: INFO: creating *v1.Role: csi-mock-volumes-7342-7269/external-resizer-cfg-csi-mock-volumes-7342 Nov 6 01:54:44.733: INFO: creating *v1.RoleBinding: csi-mock-volumes-7342-7269/csi-resizer-role-cfg Nov 6 01:54:44.736: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-snapshotter Nov 6 01:54:44.738: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7342 Nov 6 01:54:44.738: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7342 Nov 6 01:54:44.740: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7342 Nov 6 01:54:44.743: INFO: creating *v1.Role: csi-mock-volumes-7342-7269/external-snapshotter-leaderelection-csi-mock-volumes-7342 Nov 6 01:54:44.745: INFO: creating *v1.RoleBinding: csi-mock-volumes-7342-7269/external-snapshotter-leaderelection Nov 6 01:54:44.749: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-mock Nov 6 01:54:44.751: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7342 Nov 6 01:54:44.755: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7342 Nov 6 01:54:44.761: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7342 Nov 6 01:54:44.763: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7342 Nov 6 01:54:44.767: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7342 Nov 6 01:54:44.775: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7342 Nov 6 01:54:44.778: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7342 Nov 6 01:54:44.781: INFO: creating *v1.StatefulSet: csi-mock-volumes-7342-7269/csi-mockplugin Nov 6 01:54:44.785: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7342 Nov 6 01:54:44.788: INFO: creating *v1.StatefulSet: csi-mock-volumes-7342-7269/csi-mockplugin-attacher Nov 6 01:54:44.791: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7342" Nov 6 01:54:44.794: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7342 to register on node node2 STEP: Creating pod Nov 6 01:54:59.317: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 6 01:54:59.335: INFO: Deleting pod "pvc-volume-tester-g2845" in namespace "csi-mock-volumes-7342" Nov 6 01:54:59.340: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g2845" to be fully deleted STEP: Deleting pod pvc-volume-tester-g2845 Nov 6 01:54:59.342: INFO: Deleting pod "pvc-volume-tester-g2845" in namespace "csi-mock-volumes-7342" STEP: Deleting claim pvc-6t4gl STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-7342 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7342 STEP: Waiting for namespaces [csi-mock-volumes-7342] to vanish STEP: uninstalling csi mock driver Nov 6 01:55:05.362: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-attacher Nov 6 01:55:05.366: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7342 Nov 6 01:55:05.370: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7342 Nov 6 01:55:05.373: INFO: deleting *v1.Role: 
csi-mock-volumes-7342-7269/external-attacher-cfg-csi-mock-volumes-7342 Nov 6 01:55:05.377: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7342-7269/csi-attacher-role-cfg Nov 6 01:55:05.381: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-provisioner Nov 6 01:55:05.384: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7342 Nov 6 01:55:05.389: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7342 Nov 6 01:55:05.392: INFO: deleting *v1.Role: csi-mock-volumes-7342-7269/external-provisioner-cfg-csi-mock-volumes-7342 Nov 6 01:55:05.395: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7342-7269/csi-provisioner-role-cfg Nov 6 01:55:05.399: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-resizer Nov 6 01:55:05.402: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7342 Nov 6 01:55:05.406: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7342 Nov 6 01:55:05.409: INFO: deleting *v1.Role: csi-mock-volumes-7342-7269/external-resizer-cfg-csi-mock-volumes-7342 Nov 6 01:55:05.412: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7342-7269/csi-resizer-role-cfg Nov 6 01:55:05.415: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-snapshotter Nov 6 01:55:05.418: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7342 Nov 6 01:55:05.422: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7342 Nov 6 01:55:05.425: INFO: deleting *v1.Role: csi-mock-volumes-7342-7269/external-snapshotter-leaderelection-csi-mock-volumes-7342 Nov 6 01:55:05.428: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7342-7269/external-snapshotter-leaderelection Nov 6 01:55:05.431: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7342-7269/csi-mock Nov 6 01:55:05.435: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7342 Nov 6 01:55:05.438: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7342 Nov 6 01:55:05.442: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7342 Nov 6 01:55:05.447: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7342 Nov 6 01:55:05.451: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7342 Nov 6 01:55:05.454: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7342 Nov 6 01:55:05.458: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7342 Nov 6 01:55:05.461: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7342-7269/csi-mockplugin Nov 6 01:55:05.465: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7342 Nov 6 01:55:05.469: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7342-7269/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7342-7269 STEP: Waiting for namespaces [csi-mock-volumes-7342-7269] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:17.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:32.849 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity used, insufficient capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":8,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:50.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 6 01:54:51.028: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:54:53.031: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:54:55.030: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:54:57.032: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:54:59.032: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:01.031: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:03.031: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:05.032: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:07.031: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:09.032: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:11.033: INFO: The status of Pod test-hostpath-type-nmgr5 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:13.031: INFO: The status of Pod test-hostpath-type-nmgr5 is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:21.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-146" for this suite. 
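For context on what the HostPathType File specs above exercise: the hostPath volume's type field controls how the kubelet validates (or creates) the host path. With type: FileOrCreate the kubelet creates an empty file if none exists, while an unset type performs no check at all, which is why 'afile' can then be mounted with HostPathUnset. A minimal illustrative pod follows (the pod name and busybox image are placeholders, not taken from the suite's manifests; the /mnt/test prefix mirrors the path used by the device specs later in this log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-fileorcreate-demo   # illustrative name
spec:
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: afile
      mountPath: /mnt/test/afile
  volumes:
  - name: afile
    hostPath:
      path: /mnt/test/afile
      type: FileOrCreate             # kubelet creates the file if it is missing
EOF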
• [SLOW TEST:30.096 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:147 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset","total":-1,"completed":9,"skipped":319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:50.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627 Nov 6 01:54:50.766: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: creating an external dynamic provisioner pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass Nov 6 01:55:12.919: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: creating a claim with a external provisioning annotation STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-4136 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1572864000 0} {} 1500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-4136-externalq4x8h,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Nov 6 01:55:12.925: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-lfbh4] to have phase Bound Nov 6 01:55:12.928: INFO: PersistentVolumeClaim pvc-lfbh4 found but phase is Pending instead of Bound. Nov 6 01:55:14.931: INFO: PersistentVolumeClaim pvc-lfbh4 found but phase is Pending instead of Bound. 
Nov 6 01:55:16.935: INFO: PersistentVolumeClaim pvc-lfbh4 found and phase=Bound (4.00941889s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-4136"/"pvc-lfbh4" STEP: deleting the claim's PV "pvc-2b18dc51-3221-4d79-bf13-78feeecc33d8" Nov 6 01:55:16.944: INFO: Waiting up to 20m0s for PersistentVolume pvc-2b18dc51-3221-4d79-bf13-78feeecc33d8 to get deleted Nov 6 01:55:16.946: INFO: PersistentVolume pvc-2b18dc51-3221-4d79-bf13-78feeecc33d8 found and phase=Bound (2.332234ms) Nov 6 01:55:21.949: INFO: PersistentVolume pvc-2b18dc51-3221-4d79-bf13-78feeecc33d8 was removed Nov 6 01:55:21.949: INFO: deleting claim "volume-provisioning-4136"/"pvc-lfbh4" Nov 6 01:55:21.951: INFO: deleting storage class volume-provisioning-4136-externalq4x8h STEP: Deleting pod external-provisioner-drpqh in namespace volume-provisioning-4136 [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:21.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-4136" for this suite. • [SLOW TEST:31.233 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner External /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:626 should let an external dynamic provisioner create and delete persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:627 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]","total":-1,"completed":8,"skipped":310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:22.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support memory backed volumes of specified size /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298 [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:22.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2903" for this suite. 
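The EmptyDir spec above ("pod should support memory backed volumes of specified size") corresponds to an emptyDir volume with medium: Memory, which is backed by tmpfs, plus a sizeLimit for the requested size (how strictly the limit is applied depends on the cluster's feature gates). An illustrative manifest, with the pod name, image and the 128Mi figure as assumptions rather than values from the suite:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-memory-demo   # illustrative name
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "df -h /cache && sleep 3600"]   # df shows the tmpfs mount and its size
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
      sizeLimit: 128Mi         # the "specified size" the spec checks for
EOF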
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":9,"skipped":335,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:20.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-3788 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:54:20.309: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-attacher Nov 6 01:54:20.311: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3788 Nov 6 01:54:20.311: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3788 Nov 6 01:54:20.314: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3788 Nov 6 01:54:20.317: INFO: creating *v1.Role: csi-mock-volumes-3788-5300/external-attacher-cfg-csi-mock-volumes-3788 Nov 6 01:54:20.319: INFO: creating *v1.RoleBinding: csi-mock-volumes-3788-5300/csi-attacher-role-cfg Nov 6 01:54:20.321: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-provisioner Nov 6 01:54:20.324: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3788 Nov 6 01:54:20.324: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3788 Nov 6 01:54:20.326: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3788 Nov 6 01:54:20.328: INFO: creating *v1.Role: csi-mock-volumes-3788-5300/external-provisioner-cfg-csi-mock-volumes-3788 Nov 6 01:54:20.331: INFO: creating *v1.RoleBinding: csi-mock-volumes-3788-5300/csi-provisioner-role-cfg Nov 6 01:54:20.334: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-resizer Nov 6 01:54:20.336: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3788 Nov 6 01:54:20.336: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3788 Nov 6 01:54:20.339: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3788 Nov 6 01:54:20.342: INFO: creating *v1.Role: csi-mock-volumes-3788-5300/external-resizer-cfg-csi-mock-volumes-3788 Nov 6 01:54:20.345: INFO: creating *v1.RoleBinding: csi-mock-volumes-3788-5300/csi-resizer-role-cfg Nov 6 01:54:20.348: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-snapshotter Nov 6 01:54:20.350: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3788 Nov 6 01:54:20.350: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3788 Nov 6 01:54:20.353: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3788 Nov 6 01:54:20.355: INFO: creating *v1.Role: csi-mock-volumes-3788-5300/external-snapshotter-leaderelection-csi-mock-volumes-3788 Nov 6 01:54:20.357: INFO: creating *v1.RoleBinding: csi-mock-volumes-3788-5300/external-snapshotter-leaderelection Nov 6 01:54:20.360: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-mock Nov 6 01:54:20.362: INFO: 
creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3788 Nov 6 01:54:20.364: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3788 Nov 6 01:54:20.368: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3788 Nov 6 01:54:20.370: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3788 Nov 6 01:54:20.373: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3788 Nov 6 01:54:20.375: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3788 Nov 6 01:54:20.378: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3788 Nov 6 01:54:20.381: INFO: creating *v1.StatefulSet: csi-mock-volumes-3788-5300/csi-mockplugin Nov 6 01:54:20.386: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3788 Nov 6 01:54:20.388: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3788" Nov 6 01:54:20.390: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3788 to register on node node1 STEP: Creating pod Nov 6 01:54:29.908: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:54:29.913: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cwww4] to have phase Bound Nov 6 01:54:29.915: INFO: PersistentVolumeClaim pvc-cwww4 found but phase is Pending instead of Bound. Nov 6 01:54:31.918: INFO: PersistentVolumeClaim pvc-cwww4 found and phase=Bound (2.005282669s) Nov 6 01:54:31.933: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cwww4] to have phase Bound Nov 6 01:54:31.940: INFO: PersistentVolumeClaim pvc-cwww4 found and phase=Bound (6.658358ms) STEP: Waiting for expected CSI calls STEP: Waiting for pod to be running STEP: Deleting the previously created pod Nov 6 01:54:37.033: INFO: Deleting pod "pvc-volume-tester-prggx" in namespace "csi-mock-volumes-3788" Nov 6 01:54:37.039: INFO: Wait up to 5m0s for pod "pvc-volume-tester-prggx" to be fully deleted STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-prggx Nov 6 01:54:50.054: INFO: Deleting pod "pvc-volume-tester-prggx" in namespace "csi-mock-volumes-3788" STEP: Deleting claim pvc-cwww4 Nov 6 01:54:50.061: INFO: Waiting up to 2m0s for PersistentVolume pvc-d2348400-ae0d-4ac6-b14e-5a1a816b35fa to get deleted Nov 6 01:54:50.064: INFO: PersistentVolume pvc-d2348400-ae0d-4ac6-b14e-5a1a816b35fa found and phase=Bound (2.574709ms) Nov 6 01:54:52.067: INFO: PersistentVolume pvc-d2348400-ae0d-4ac6-b14e-5a1a816b35fa was removed STEP: Deleting storageclass csi-mock-volumes-3788-scj5sbm STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3788 STEP: Waiting for namespaces [csi-mock-volumes-3788] to vanish STEP: uninstalling csi mock driver Nov 6 01:54:58.082: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-attacher Nov 6 01:54:58.087: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3788 Nov 6 01:54:58.090: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3788 Nov 6 01:54:58.094: INFO: deleting *v1.Role: csi-mock-volumes-3788-5300/external-attacher-cfg-csi-mock-volumes-3788 Nov 6 01:54:58.097: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3788-5300/csi-attacher-role-cfg Nov 6 01:54:58.102: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-provisioner Nov 6 01:54:58.107: INFO: deleting *v1.ClusterRole: 
external-provisioner-runner-csi-mock-volumes-3788 Nov 6 01:54:58.111: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3788 Nov 6 01:54:58.117: INFO: deleting *v1.Role: csi-mock-volumes-3788-5300/external-provisioner-cfg-csi-mock-volumes-3788 Nov 6 01:54:58.121: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3788-5300/csi-provisioner-role-cfg Nov 6 01:54:58.128: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-resizer Nov 6 01:54:58.134: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3788 Nov 6 01:54:58.138: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3788 Nov 6 01:54:58.141: INFO: deleting *v1.Role: csi-mock-volumes-3788-5300/external-resizer-cfg-csi-mock-volumes-3788 Nov 6 01:54:58.145: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3788-5300/csi-resizer-role-cfg Nov 6 01:54:58.148: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-snapshotter Nov 6 01:54:58.151: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3788 Nov 6 01:54:58.155: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3788 Nov 6 01:54:58.158: INFO: deleting *v1.Role: csi-mock-volumes-3788-5300/external-snapshotter-leaderelection-csi-mock-volumes-3788 Nov 6 01:54:58.161: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3788-5300/external-snapshotter-leaderelection Nov 6 01:54:58.164: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3788-5300/csi-mock Nov 6 01:54:58.167: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3788 Nov 6 01:54:58.171: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3788 Nov 6 01:54:58.174: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3788 Nov 6 01:54:58.177: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3788 Nov 6 01:54:58.180: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3788 Nov 6 01:54:58.188: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3788 Nov 6 01:54:58.191: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3788 Nov 6 01:54:58.194: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3788-5300/csi-mockplugin Nov 6 01:54:58.198: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3788 STEP: deleting the driver namespace: csi-mock-volumes-3788-5300 STEP: Waiting for namespaces [csi-mock-volumes-3788-5300] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:26.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:65.972 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should call NodeUnstage after NodeStage success /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success","total":-1,"completed":7,"skipped":147,"failed":0} 
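Both "[Volume type: dir-link-bindmounted]" PersistentVolumes-local specs in this output (one passed earlier, another follows below) prepare the backing volume on the node through a hostexec pod. Stripped of the generated UUID, the host-side setup and teardown captured in the ExecWithOptions entries amounts to:

# setup, run on the node via: nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c '...'
d=/tmp/local-volume-test-$(uuidgen)          # the suite generates a random UUID here
mkdir "${d}-backend"
mount --bind "${d}-backend" "${d}-backend"   # bind-mount the directory onto itself
ln -s "${d}-backend" "${d}"                  # the local PV created next points at this symlink

# teardown, after the PVC and PV are deleted
rm "${d}"
umount "${d}-backend"
rm -r "${d}-backend"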
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:17.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:55:21.588: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-758a9acc-3f72-4e22-b881-a005331c59be-backend && mount --bind /tmp/local-volume-test-758a9acc-3f72-4e22-b881-a005331c59be-backend /tmp/local-volume-test-758a9acc-3f72-4e22-b881-a005331c59be-backend && ln -s /tmp/local-volume-test-758a9acc-3f72-4e22-b881-a005331c59be-backend /tmp/local-volume-test-758a9acc-3f72-4e22-b881-a005331c59be] Namespace:persistent-local-volumes-test-6085 PodName:hostexec-node1-xkp5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:21.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:55:21.685: INFO: Creating a PV followed by a PVC Nov 6 01:55:21.693: INFO: Waiting for PV local-pvwhbdx to bind to PVC pvc-vz9nl Nov 6 01:55:21.693: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vz9nl] to have phase Bound Nov 6 01:55:21.696: INFO: PersistentVolumeClaim pvc-vz9nl found but phase is Pending instead of Bound. 
Nov 6 01:55:23.699: INFO: PersistentVolumeClaim pvc-vz9nl found and phase=Bound (2.00622914s) Nov 6 01:55:23.699: INFO: Waiting up to 3m0s for PersistentVolume local-pvwhbdx to have phase Bound Nov 6 01:55:23.702: INFO: PersistentVolume local-pvwhbdx found and phase=Bound (2.551579ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:55:27.731: INFO: pod "pod-c9dbc734-34f9-4db2-a5fa-03e7b7a3d826" created on Node "node1" STEP: Writing in pod1 Nov 6 01:55:27.731: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6085 PodName:pod-c9dbc734-34f9-4db2-a5fa-03e7b7a3d826 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:55:27.731: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:27.844: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 6 01:55:27.844: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6085 PodName:pod-c9dbc734-34f9-4db2-a5fa-03e7b7a3d826 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:55:27.844: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:27.926: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-c9dbc734-34f9-4db2-a5fa-03e7b7a3d826 in namespace persistent-local-volumes-test-6085 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:55:27.931: INFO: Deleting PersistentVolumeClaim "pvc-vz9nl" Nov 6 01:55:27.934: INFO: Deleting PersistentVolume "local-pvwhbdx" STEP: Removing the test directory Nov 6 01:55:27.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-758a9acc-3f72-4e22-b881-a005331c59be && umount /tmp/local-volume-test-758a9acc-3f72-4e22-b881-a005331c59be-backend && rm -r /tmp/local-volume-test-758a9acc-3f72-4e22-b881-a005331c59be-backend] Namespace:persistent-local-volumes-test-6085 PodName:hostexec-node1-xkp5k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:27.937: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:28.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6085" for this suite. 
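In both local-volume specs the actual data-path check is two shell commands run inside the test pods through the framework's ExecWithOptions, visible verbatim in the entries above. The kubectl equivalent is roughly as follows (the namespace and pod names stand in for the generated values in the log; the container is literally named write-pod in both pods):

kubectl -n <test-namespace> exec <writer-pod> -c write-pod -- /bin/sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
kubectl -n <test-namespace> exec <reader-pod> -c write-pod -- /bin/sh -c 'cat /mnt/volume1/test-file'   # expected output: test-file-content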
• [SLOW TEST:10.514 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":9,"skipped":457,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:26.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Nov 6 01:55:26.410: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Nov 6 01:55:26.416: INFO: Waiting up to 30s for PersistentVolume hostpath-7c6jb to have phase Available Nov 6 01:55:26.417: INFO: PersistentVolume hostpath-7c6jb found but phase is Pending instead of Available. Nov 6 01:55:27.420: INFO: PersistentVolume hostpath-7c6jb found and phase=Available (1.004201853s) STEP: Checking that PV Protection finalizer is set [It] Verify "immediate" deletion of a PV that is not bound to a PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:99 STEP: Deleting the PV Nov 6 01:55:27.427: INFO: Waiting up to 3m0s for PersistentVolume hostpath-7c6jb to get deleted Nov 6 01:55:27.429: INFO: PersistentVolume hostpath-7c6jb found and phase=Available (1.881465ms) Nov 6 01:55:29.434: INFO: PersistentVolume hostpath-7c6jb was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:29.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-803" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Nov 6 01:55:29.443: INFO: AfterEach: Cleaning up test resources. 
Nov 6 01:55:29.443: INFO: pvc is nil Nov 6 01:55:29.443: INFO: Deleting PersistentVolume "hostpath-7c6jb" • ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":8,"skipped":237,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:22.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 6 01:55:22.114: INFO: The status of Pod test-hostpath-type-cx8dr is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:24.119: INFO: The status of Pod test-hostpath-type-cx8dr is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:26.119: INFO: The status of Pod test-hostpath-type-cx8dr is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Nov 6 01:55:26.121: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-2141 PodName:test-hostpath-type-cx8dr ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:55:26.121: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:30.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-2141" for this suite. 
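For the PV Protection spec that passed a few entries up: the pv-protection controller in kube-controller-manager attaches the kubernetes.io/pv-protection finalizer to every PersistentVolume, and because the PV here was never bound to a PVC its deletion completes almost immediately once requested. An illustrative manual check, with the PV name as a placeholder for the generated hostpath-* name:

kubectl get pv <pv-name> -o jsonpath='{.metadata.finalizers}'   # expect ["kubernetes.io/pv-protection"]
kubectl delete pv <pv-name>
kubectl get pv <pv-name>                                        # NotFound shortly afterwards for an unbound PV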
• [SLOW TEST:8.189 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:281 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset","total":-1,"completed":10,"skipped":343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:30.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete default persistent volumes [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692 Nov 6 01:55:30.497: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:30.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-2012" for this suite. 
S [SKIPPING] [0.038 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner Default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:691 should create and delete default persistent volumes [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:692 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:693 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap W1106 01:50:33.430951 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:33.431: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:33.432: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:33.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6711" for this suite. 
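The ConfigMap spec above creates a pod whose volume references a ConfigMap that intentionally does not exist and is not marked optional; the expected outcome is that the pod never starts, which is consistent with the spec running for roughly five minutes (see the summary below). A minimal illustrative manifest, with all names as placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-missing-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: no-such-configmap      # never created
      optional: false              # non-optional: the volume mount keeps failing
EOF
# the pod stays in ContainerCreating with FailedMount events until the ConfigMap appears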
• [SLOW TEST:300.089 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:548 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":1,"skipped":1,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:30.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 6 01:55:30.648: INFO: The status of Pod test-hostpath-type-mwjjz is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:32.651: INFO: The status of Pod test-hostpath-type-mwjjz is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:34.652: INFO: The status of Pod test-hostpath-type-mwjjz is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Nov 6 01:55:34.654: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-6410 PodName:test-hostpath-type-mwjjz ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:55:34.654: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:36.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-6410" for this suite. 
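The HostPathType device specs (the Character Device one earlier and the Block Device one just above) seed the node with dummy device nodes from inside their test pods, using the exact commands captured in the log, and then assert that declaring a mismatched hostPath type (for example type: Socket for a block device) produces a HostPathType error event instead of a successful mount:

# run inside the test-hostpath-type-* pod, which has the node path mounted at /mnt/test
mknod /mnt/test/achardev c 89 1    # character device, major 89 minor 1
mknod /mnt/test/ablkdev  b 89 1    # block device
# a hostPath volume for /mnt/test/ablkdev declared with type: Socket is then expected
# to be rejected with a HostPathType error event rather than mounted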
• [SLOW TEST:6.218 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:364 ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:28.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 6 01:55:28.116: INFO: The status of Pod test-hostpath-type-mwxwj is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:30.120: INFO: The status of Pod test-hostpath-type-mwxwj is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:32.119: INFO: The status of Pod test-hostpath-type-mwxwj is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:38.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-3507" for this suite. 
• [SLOW TEST:10.102 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:151 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory","total":-1,"completed":10,"skipped":470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:12.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529 STEP: Building a driver namespace object, basename csi-mock-volumes-7333 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:54:12.979: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-attacher Nov 6 01:54:12.982: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7333 Nov 6 01:54:12.982: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7333 Nov 6 01:54:12.985: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7333 Nov 6 01:54:12.987: INFO: creating *v1.Role: csi-mock-volumes-7333-7265/external-attacher-cfg-csi-mock-volumes-7333 Nov 6 01:54:12.990: INFO: creating *v1.RoleBinding: csi-mock-volumes-7333-7265/csi-attacher-role-cfg Nov 6 01:54:12.993: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-provisioner Nov 6 01:54:12.997: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7333 Nov 6 01:54:12.997: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7333 Nov 6 01:54:12.999: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7333 Nov 6 01:54:13.002: INFO: creating *v1.Role: csi-mock-volumes-7333-7265/external-provisioner-cfg-csi-mock-volumes-7333 Nov 6 01:54:13.004: INFO: creating *v1.RoleBinding: csi-mock-volumes-7333-7265/csi-provisioner-role-cfg Nov 6 01:54:13.007: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-resizer Nov 6 01:54:13.009: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7333 Nov 6 01:54:13.009: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7333 Nov 6 01:54:13.012: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7333 Nov 6 01:54:13.014: INFO: creating *v1.Role: csi-mock-volumes-7333-7265/external-resizer-cfg-csi-mock-volumes-7333 Nov 6 01:54:13.017: INFO: creating *v1.RoleBinding: csi-mock-volumes-7333-7265/csi-resizer-role-cfg Nov 6 01:54:13.019: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-snapshotter Nov 6 01:54:13.022: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7333 Nov 6 01:54:13.022: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7333 Nov 6 01:54:13.024: INFO: 
creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7333 Nov 6 01:54:13.026: INFO: creating *v1.Role: csi-mock-volumes-7333-7265/external-snapshotter-leaderelection-csi-mock-volumes-7333 Nov 6 01:54:13.029: INFO: creating *v1.RoleBinding: csi-mock-volumes-7333-7265/external-snapshotter-leaderelection Nov 6 01:54:13.031: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-mock Nov 6 01:54:13.034: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7333 Nov 6 01:54:13.037: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7333 Nov 6 01:54:13.040: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7333 Nov 6 01:54:13.042: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7333 Nov 6 01:54:13.044: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7333 Nov 6 01:54:13.047: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7333 Nov 6 01:54:13.049: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7333 Nov 6 01:54:13.052: INFO: creating *v1.StatefulSet: csi-mock-volumes-7333-7265/csi-mockplugin Nov 6 01:54:13.057: INFO: creating *v1.StatefulSet: csi-mock-volumes-7333-7265/csi-mockplugin-attacher Nov 6 01:54:13.060: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7333 to register on node node1 STEP: Creating pod Nov 6 01:54:18.073: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:54:18.078: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-h5w94] to have phase Bound Nov 6 01:54:18.080: INFO: PersistentVolumeClaim pvc-h5w94 found but phase is Pending instead of Bound. Nov 6 01:54:20.085: INFO: PersistentVolumeClaim pvc-h5w94 found and phase=Bound (2.006956818s) STEP: Creating pod Nov 6 01:54:42.112: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:54:42.115: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-vwv82] to have phase Bound Nov 6 01:54:42.118: INFO: PersistentVolumeClaim pvc-vwv82 found but phase is Pending instead of Bound. Nov 6 01:54:44.124: INFO: PersistentVolumeClaim pvc-vwv82 found and phase=Bound (2.008899732s) STEP: Creating pod Nov 6 01:54:56.154: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:54:56.159: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-2g92m] to have phase Bound Nov 6 01:54:56.161: INFO: PersistentVolumeClaim pvc-2g92m found but phase is Pending instead of Bound. 
Nov 6 01:54:58.164: INFO: PersistentVolumeClaim pvc-2g92m found and phase=Bound (2.005510161s) STEP: Deleting pod pvc-volume-tester-pg8r6 Nov 6 01:55:08.194: INFO: Deleting pod "pvc-volume-tester-pg8r6" in namespace "csi-mock-volumes-7333" Nov 6 01:55:08.199: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pg8r6" to be fully deleted STEP: Deleting pod pvc-volume-tester-2twr7 Nov 6 01:55:12.206: INFO: Deleting pod "pvc-volume-tester-2twr7" in namespace "csi-mock-volumes-7333" Nov 6 01:55:12.210: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2twr7" to be fully deleted STEP: Deleting pod pvc-volume-tester-f6p2c Nov 6 01:55:16.215: INFO: Deleting pod "pvc-volume-tester-f6p2c" in namespace "csi-mock-volumes-7333" Nov 6 01:55:16.220: INFO: Wait up to 5m0s for pod "pvc-volume-tester-f6p2c" to be fully deleted STEP: Deleting claim pvc-h5w94 Nov 6 01:55:20.236: INFO: Waiting up to 2m0s for PersistentVolume pvc-3edecb13-c919-4f59-8053-0e3a063bdddf to get deleted Nov 6 01:55:20.238: INFO: PersistentVolume pvc-3edecb13-c919-4f59-8053-0e3a063bdddf found and phase=Bound (2.107433ms) Nov 6 01:55:22.241: INFO: PersistentVolume pvc-3edecb13-c919-4f59-8053-0e3a063bdddf was removed STEP: Deleting claim pvc-vwv82 Nov 6 01:55:22.247: INFO: Waiting up to 2m0s for PersistentVolume pvc-58733f0e-7033-4c88-a0c6-ca2d315d8dae to get deleted Nov 6 01:55:22.249: INFO: PersistentVolume pvc-58733f0e-7033-4c88-a0c6-ca2d315d8dae found and phase=Bound (2.136916ms) Nov 6 01:55:24.255: INFO: PersistentVolume pvc-58733f0e-7033-4c88-a0c6-ca2d315d8dae was removed STEP: Deleting claim pvc-2g92m Nov 6 01:55:24.262: INFO: Waiting up to 2m0s for PersistentVolume pvc-8c759b42-d628-4fc3-a37c-3dae2bf65d61 to get deleted Nov 6 01:55:24.264: INFO: PersistentVolume pvc-8c759b42-d628-4fc3-a37c-3dae2bf65d61 found and phase=Bound (2.064162ms) Nov 6 01:55:26.268: INFO: PersistentVolume pvc-8c759b42-d628-4fc3-a37c-3dae2bf65d61 was removed STEP: Deleting storageclass csi-mock-volumes-7333-scnzskr STEP: Deleting storageclass csi-mock-volumes-7333-scnj8m4 STEP: Deleting storageclass csi-mock-volumes-7333-sc2d6gf STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7333 STEP: Waiting for namespaces [csi-mock-volumes-7333] to vanish STEP: uninstalling csi mock driver Nov 6 01:55:32.285: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-attacher Nov 6 01:55:32.291: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7333 Nov 6 01:55:32.294: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7333 Nov 6 01:55:32.298: INFO: deleting *v1.Role: csi-mock-volumes-7333-7265/external-attacher-cfg-csi-mock-volumes-7333 Nov 6 01:55:32.301: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7333-7265/csi-attacher-role-cfg Nov 6 01:55:32.304: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-provisioner Nov 6 01:55:32.310: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7333 Nov 6 01:55:32.313: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7333 Nov 6 01:55:32.318: INFO: deleting *v1.Role: csi-mock-volumes-7333-7265/external-provisioner-cfg-csi-mock-volumes-7333 Nov 6 01:55:32.321: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7333-7265/csi-provisioner-role-cfg Nov 6 01:55:32.324: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-resizer Nov 6 01:55:32.327: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7333 Nov 6 01:55:32.331: INFO: deleting 
*v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7333 Nov 6 01:55:32.334: INFO: deleting *v1.Role: csi-mock-volumes-7333-7265/external-resizer-cfg-csi-mock-volumes-7333 Nov 6 01:55:32.337: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7333-7265/csi-resizer-role-cfg Nov 6 01:55:32.341: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-snapshotter Nov 6 01:55:32.343: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7333 Nov 6 01:55:32.346: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7333 Nov 6 01:55:32.350: INFO: deleting *v1.Role: csi-mock-volumes-7333-7265/external-snapshotter-leaderelection-csi-mock-volumes-7333 Nov 6 01:55:32.353: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7333-7265/external-snapshotter-leaderelection Nov 6 01:55:32.356: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7333-7265/csi-mock Nov 6 01:55:32.359: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7333 Nov 6 01:55:32.362: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7333 Nov 6 01:55:32.366: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7333 Nov 6 01:55:32.369: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7333 Nov 6 01:55:32.371: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7333 Nov 6 01:55:32.375: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7333 Nov 6 01:55:32.378: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7333 Nov 6 01:55:32.382: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7333-7265/csi-mockplugin Nov 6 01:55:32.385: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7333-7265/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-7333-7265 STEP: Waiting for namespaces [csi-mock-volumes-7333-7265] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:44.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:91.480 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI volume limit information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:528 should report attach limit when limit is bigger than 0 [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:529 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]","total":-1,"completed":4,"skipped":220,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:54:01.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=default 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 STEP: Building a driver namespace object, basename csi-mock-volumes-4075 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:54:01.339: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-attacher Nov 6 01:54:01.341: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4075 Nov 6 01:54:01.341: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4075 Nov 6 01:54:01.344: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4075 Nov 6 01:54:01.347: INFO: creating *v1.Role: csi-mock-volumes-4075-257/external-attacher-cfg-csi-mock-volumes-4075 Nov 6 01:54:01.349: INFO: creating *v1.RoleBinding: csi-mock-volumes-4075-257/csi-attacher-role-cfg Nov 6 01:54:01.353: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-provisioner Nov 6 01:54:01.355: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4075 Nov 6 01:54:01.355: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4075 Nov 6 01:54:01.358: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4075 Nov 6 01:54:01.361: INFO: creating *v1.Role: csi-mock-volumes-4075-257/external-provisioner-cfg-csi-mock-volumes-4075 Nov 6 01:54:01.363: INFO: creating *v1.RoleBinding: csi-mock-volumes-4075-257/csi-provisioner-role-cfg Nov 6 01:54:01.365: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-resizer Nov 6 01:54:01.368: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4075 Nov 6 01:54:01.368: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4075 Nov 6 01:54:01.370: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4075 Nov 6 01:54:01.373: INFO: creating *v1.Role: csi-mock-volumes-4075-257/external-resizer-cfg-csi-mock-volumes-4075 Nov 6 01:54:01.379: INFO: creating *v1.RoleBinding: csi-mock-volumes-4075-257/csi-resizer-role-cfg Nov 6 01:54:01.382: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-snapshotter Nov 6 01:54:01.384: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4075 Nov 6 01:54:01.385: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4075 Nov 6 01:54:01.388: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4075 Nov 6 01:54:01.390: INFO: creating *v1.Role: csi-mock-volumes-4075-257/external-snapshotter-leaderelection-csi-mock-volumes-4075 Nov 6 01:54:01.393: INFO: creating *v1.RoleBinding: csi-mock-volumes-4075-257/external-snapshotter-leaderelection Nov 6 01:54:01.396: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-mock Nov 6 01:54:01.398: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4075 Nov 6 01:54:01.401: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4075 Nov 6 01:54:01.403: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4075 Nov 6 01:54:01.406: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4075 Nov 6 01:54:01.409: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4075 Nov 6 01:54:01.412: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4075 Nov 6 01:54:01.413: 
INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4075 Nov 6 01:54:01.416: INFO: creating *v1.StatefulSet: csi-mock-volumes-4075-257/csi-mockplugin Nov 6 01:54:01.420: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4075 Nov 6 01:54:01.422: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4075" Nov 6 01:54:01.424: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4075 to register on node node1 STEP: Creating pod with fsGroup Nov 6 01:54:15.946: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:54:15.951: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9b77k] to have phase Bound Nov 6 01:54:15.953: INFO: PersistentVolumeClaim pvc-9b77k found but phase is Pending instead of Bound. Nov 6 01:54:17.959: INFO: PersistentVolumeClaim pvc-9b77k found and phase=Bound (2.007505241s) Nov 6 01:54:21.984: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-4075] Namespace:csi-mock-volumes-4075 PodName:pvc-volume-tester-rh9j2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:21.985: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:22.091: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-4075/csi-mock-volumes-4075'; sync] Namespace:csi-mock-volumes-4075 PodName:pvc-volume-tester-rh9j2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:22.091: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:25.101: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-4075/csi-mock-volumes-4075] Namespace:csi-mock-volumes-4075 PodName:pvc-volume-tester-rh9j2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:25.101: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:54:25.242: INFO: pod csi-mock-volumes-4075/pvc-volume-tester-rh9j2 exec for cmd ls -l /mnt/test/csi-mock-volumes-4075/csi-mock-volumes-4075, stdout: -rw-r--r-- 1 root 17717 13 Nov 6 01:54 /mnt/test/csi-mock-volumes-4075/csi-mock-volumes-4075, stderr: Nov 6 01:54:25.242: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-4075] Namespace:csi-mock-volumes-4075 PodName:pvc-volume-tester-rh9j2 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:54:25.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-rh9j2 Nov 6 01:54:25.351: INFO: Deleting pod "pvc-volume-tester-rh9j2" in namespace "csi-mock-volumes-4075" Nov 6 01:54:25.355: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rh9j2" to be fully deleted STEP: Deleting claim pvc-9b77k Nov 6 01:55:09.368: INFO: Waiting up to 2m0s for PersistentVolume pvc-d053d717-f794-4506-a4fc-37ca8a03a434 to get deleted Nov 6 01:55:09.370: INFO: PersistentVolume pvc-d053d717-f794-4506-a4fc-37ca8a03a434 found and phase=Bound (1.839224ms) Nov 6 01:55:11.379: INFO: PersistentVolume pvc-d053d717-f794-4506-a4fc-37ca8a03a434 was removed STEP: Deleting storageclass csi-mock-volumes-4075-scslmfj STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4075 STEP: Waiting for namespaces [csi-mock-volumes-4075] to vanish STEP: uninstalling csi mock driver Nov 6 01:55:17.393: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-attacher Nov 6 01:55:17.397: INFO: deleting 
*v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4075 Nov 6 01:55:17.401: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4075 Nov 6 01:55:17.404: INFO: deleting *v1.Role: csi-mock-volumes-4075-257/external-attacher-cfg-csi-mock-volumes-4075 Nov 6 01:55:17.408: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4075-257/csi-attacher-role-cfg Nov 6 01:55:17.412: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-provisioner Nov 6 01:55:17.416: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4075 Nov 6 01:55:17.439: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4075 Nov 6 01:55:17.447: INFO: deleting *v1.Role: csi-mock-volumes-4075-257/external-provisioner-cfg-csi-mock-volumes-4075 Nov 6 01:55:17.452: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4075-257/csi-provisioner-role-cfg Nov 6 01:55:17.456: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-resizer Nov 6 01:55:17.459: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4075 Nov 6 01:55:17.462: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4075 Nov 6 01:55:17.465: INFO: deleting *v1.Role: csi-mock-volumes-4075-257/external-resizer-cfg-csi-mock-volumes-4075 Nov 6 01:55:17.469: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4075-257/csi-resizer-role-cfg Nov 6 01:55:17.473: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-snapshotter Nov 6 01:55:17.476: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4075 Nov 6 01:55:17.479: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4075 Nov 6 01:55:17.482: INFO: deleting *v1.Role: csi-mock-volumes-4075-257/external-snapshotter-leaderelection-csi-mock-volumes-4075 Nov 6 01:55:17.486: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4075-257/external-snapshotter-leaderelection Nov 6 01:55:17.489: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4075-257/csi-mock Nov 6 01:55:17.492: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4075 Nov 6 01:55:17.495: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4075 Nov 6 01:55:17.498: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4075 Nov 6 01:55:17.502: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4075 Nov 6 01:55:17.506: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4075 Nov 6 01:55:17.509: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4075 Nov 6 01:55:17.513: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4075 Nov 6 01:55:17.517: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4075-257/csi-mockplugin Nov 6 01:55:17.521: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4075 STEP: deleting the driver namespace: csi-mock-volumes-4075-257 STEP: Waiting for namespaces [csi-mock-volumes-4075-257] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:45.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:104.265 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy 
[LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436 should modify fsGroup if fsGroupPolicy=default /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":4,"skipped":109,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:33.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes W1106 01:50:33.470857 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:50:33.471: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:50:33.472: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 STEP: Building a driver namespace object, basename csi-mock-volumes-6601 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:50:33.523: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6601-6928/csi-attacher Nov 6 01:50:33.526: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6601 Nov 6 01:50:33.526: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6601 Nov 6 01:50:33.529: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6601 Nov 6 01:50:33.532: INFO: creating *v1.Role: csi-mock-volumes-6601-6928/external-attacher-cfg-csi-mock-volumes-6601 Nov 6 01:50:33.534: INFO: creating *v1.RoleBinding: csi-mock-volumes-6601-6928/csi-attacher-role-cfg Nov 6 01:50:33.537: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6601-6928/csi-provisioner Nov 6 01:50:33.539: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6601 Nov 6 01:50:33.539: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6601 Nov 6 01:50:33.542: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6601 Nov 6 01:50:33.545: INFO: creating *v1.Role: csi-mock-volumes-6601-6928/external-provisioner-cfg-csi-mock-volumes-6601 Nov 6 01:50:33.548: INFO: creating *v1.RoleBinding: csi-mock-volumes-6601-6928/csi-provisioner-role-cfg Nov 6 01:50:33.550: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6601-6928/csi-resizer Nov 6 01:50:33.553: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6601 Nov 6 01:50:33.553: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6601 Nov 6 01:50:33.555: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6601 Nov 6 01:50:33.558: INFO: creating *v1.Role: csi-mock-volumes-6601-6928/external-resizer-cfg-csi-mock-volumes-6601 Nov 6 01:50:33.561: INFO: creating *v1.RoleBinding: csi-mock-volumes-6601-6928/csi-resizer-role-cfg Nov 6 01:50:33.563: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-6601-6928/csi-snapshotter Nov 6 01:50:33.565: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6601 Nov 6 01:50:33.565: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6601 Nov 6 01:50:33.568: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6601 Nov 6 01:50:33.571: INFO: creating *v1.Role: csi-mock-volumes-6601-6928/external-snapshotter-leaderelection-csi-mock-volumes-6601 Nov 6 01:50:33.574: INFO: creating *v1.RoleBinding: csi-mock-volumes-6601-6928/external-snapshotter-leaderelection Nov 6 01:50:33.576: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6601-6928/csi-mock Nov 6 01:50:33.579: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6601 Nov 6 01:50:33.582: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6601 Nov 6 01:50:33.585: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6601 Nov 6 01:50:33.589: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6601 Nov 6 01:50:33.593: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6601 Nov 6 01:50:33.595: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6601 Nov 6 01:50:33.598: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6601 Nov 6 01:50:33.601: INFO: creating *v1.StatefulSet: csi-mock-volumes-6601-6928/csi-mockplugin Nov 6 01:50:33.606: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6601 to register on node node1 STEP: Creating pod Nov 6 01:50:49.879: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:50:49.885: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wh2pq] to have phase Bound Nov 6 01:50:49.887: INFO: PersistentVolumeClaim pvc-wh2pq found but phase is Pending instead of Bound. 
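The entries that follow show this spec verifying that the pod cannot start (and that a VolumeAttachment exists) while no CSIDriver object is registered, and then deploying a CSIDriver with attachRequired=false so the pod can run without an attach operation. A minimal manual equivalent, assuming the driver name from the log and the standard storage.k8s.io/v1 CSIDriver schema:

# Sketch: inspect attach state, then register the mock driver as non-attachable.
kubectl get volumeattachments        # attachments created for the pending pod, if any
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi-mock-csi-mock-volumes-6601
spec:
  attachRequired: false              # tells Kubernetes no ControllerPublishVolume is needed
EOF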
Nov 6 01:50:51.891: INFO: PersistentVolumeClaim pvc-wh2pq found and phase=Bound (2.005686067s) STEP: Checking if attaching failed and pod cannot start STEP: Checking if VolumeAttachment was created for the pod STEP: Deploy CSIDriver object with attachRequired=false Nov 6 01:52:53.922: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6601 STEP: Wait for the pod in running status STEP: Wait for the volumeattachment to be deleted up to 7m0s STEP: Deleting pod pvc-volume-tester-cfwwt Nov 6 01:55:01.935: INFO: Deleting pod "pvc-volume-tester-cfwwt" in namespace "csi-mock-volumes-6601" Nov 6 01:55:01.941: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cfwwt" to be fully deleted STEP: Deleting claim pvc-wh2pq Nov 6 01:55:09.954: INFO: Waiting up to 2m0s for PersistentVolume pvc-c9930692-aea8-43a1-bdb2-2703bc0bcda6 to get deleted Nov 6 01:55:09.956: INFO: PersistentVolume pvc-c9930692-aea8-43a1-bdb2-2703bc0bcda6 found and phase=Bound (1.666907ms) Nov 6 01:55:11.960: INFO: PersistentVolume pvc-c9930692-aea8-43a1-bdb2-2703bc0bcda6 was removed STEP: Deleting storageclass csi-mock-volumes-6601-scr88mr STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6601 STEP: Waiting for namespaces [csi-mock-volumes-6601] to vanish STEP: uninstalling csi mock driver Nov 6 01:55:17.977: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6601-6928/csi-attacher Nov 6 01:55:17.981: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6601 Nov 6 01:55:17.984: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6601 Nov 6 01:55:17.987: INFO: deleting *v1.Role: csi-mock-volumes-6601-6928/external-attacher-cfg-csi-mock-volumes-6601 Nov 6 01:55:17.990: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6601-6928/csi-attacher-role-cfg Nov 6 01:55:17.994: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6601-6928/csi-provisioner Nov 6 01:55:17.997: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6601 Nov 6 01:55:18.003: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6601 Nov 6 01:55:18.008: INFO: deleting *v1.Role: csi-mock-volumes-6601-6928/external-provisioner-cfg-csi-mock-volumes-6601 Nov 6 01:55:18.011: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6601-6928/csi-provisioner-role-cfg Nov 6 01:55:18.015: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6601-6928/csi-resizer Nov 6 01:55:18.018: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6601 Nov 6 01:55:18.025: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6601 Nov 6 01:55:18.028: INFO: deleting *v1.Role: csi-mock-volumes-6601-6928/external-resizer-cfg-csi-mock-volumes-6601 Nov 6 01:55:18.033: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6601-6928/csi-resizer-role-cfg Nov 6 01:55:18.037: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6601-6928/csi-snapshotter Nov 6 01:55:18.041: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6601 Nov 6 01:55:18.044: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6601 Nov 6 01:55:18.047: INFO: deleting *v1.Role: csi-mock-volumes-6601-6928/external-snapshotter-leaderelection-csi-mock-volumes-6601 Nov 6 01:55:18.051: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6601-6928/external-snapshotter-leaderelection Nov 6 01:55:18.055: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6601-6928/csi-mock Nov 6 01:55:18.059: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-attacher-role-csi-mock-volumes-6601 Nov 6 01:55:18.063: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6601 Nov 6 01:55:18.067: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6601 Nov 6 01:55:18.069: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6601 Nov 6 01:55:18.073: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6601 Nov 6 01:55:18.076: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6601 Nov 6 01:55:18.081: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6601 Nov 6 01:55:18.084: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6601-6928/csi-mockplugin STEP: deleting the driver namespace: csi-mock-volumes-6601-6928 STEP: Waiting for namespaces [csi-mock-volumes-6601-6928] to vanish Nov 6 01:55:46.097: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6601 [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:46.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:312.674 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI CSIDriver deployment after pod creation using non-attachable mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:372 should bringup pod after deploying CSIDriver attach=false [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:373 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]","total":-1,"completed":1,"skipped":7,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:29.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-65a5d80e-c25f-410c-ac96-e90ce0ea7d14" Nov 6 01:55:31.527: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-65a5d80e-c25f-410c-ac96-e90ce0ea7d14 && dd if=/dev/zero of=/tmp/local-volume-test-65a5d80e-c25f-410c-ac96-e90ce0ea7d14/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-65a5d80e-c25f-410c-ac96-e90ce0ea7d14/file] Namespace:persistent-local-volumes-test-6452 PodName:hostexec-node1-ds2bk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Nov 6 01:55:31.527: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:31.646: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-65a5d80e-c25f-410c-ac96-e90ce0ea7d14/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6452 PodName:hostexec-node1-ds2bk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:31.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:55:31.736: INFO: Creating a PV followed by a PVC Nov 6 01:55:31.743: INFO: Waiting for PV local-pvdzdzh to bind to PVC pvc-pcrrc Nov 6 01:55:31.743: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-pcrrc] to have phase Bound Nov 6 01:55:31.745: INFO: PersistentVolumeClaim pvc-pcrrc found but phase is Pending instead of Bound. Nov 6 01:55:33.748: INFO: PersistentVolumeClaim pvc-pcrrc found but phase is Pending instead of Bound. Nov 6 01:55:35.752: INFO: PersistentVolumeClaim pvc-pcrrc found but phase is Pending instead of Bound. Nov 6 01:55:37.756: INFO: PersistentVolumeClaim pvc-pcrrc found and phase=Bound (6.013051118s) Nov 6 01:55:37.756: INFO: Waiting up to 3m0s for PersistentVolume local-pvdzdzh to have phase Bound Nov 6 01:55:37.758: INFO: PersistentVolume local-pvdzdzh found and phase=Bound (2.30961ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 6 01:55:43.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6452 exec pod-2f865d64-8b81-4b2f-9c81-ac2a599f88cb --namespace=persistent-local-volumes-test-6452 -- stat -c %g /mnt/volume1' Nov 6 01:55:44.048: INFO: stderr: "" Nov 6 01:55:44.048: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 6 01:55:48.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6452 exec pod-e2ba48d8-98df-4269-88d4-19a70c6b8941 --namespace=persistent-local-volumes-test-6452 -- stat -c %g /mnt/volume1' Nov 6 01:55:48.318: INFO: stderr: "" Nov 6 01:55:48.318: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-2f865d64-8b81-4b2f-9c81-ac2a599f88cb in namespace persistent-local-volumes-test-6452 STEP: Deleting second pod STEP: Deleting pod pod-e2ba48d8-98df-4269-88d4-19a70c6b8941 in namespace persistent-local-volumes-test-6452 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:55:48.328: INFO: Deleting PersistentVolumeClaim "pvc-pcrrc" Nov 6 01:55:48.331: INFO: Deleting PersistentVolume "local-pvdzdzh" Nov 6 01:55:48.335: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-65a5d80e-c25f-410c-ac96-e90ce0ea7d14/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] 
Namespace:persistent-local-volumes-test-6452 PodName:hostexec-node1-ds2bk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:48.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-65a5d80e-c25f-410c-ac96-e90ce0ea7d14/file Nov 6 01:55:48.423: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6452 PodName:hostexec-node1-ds2bk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:48.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-65a5d80e-c25f-410c-ac96-e90ce0ea7d14 Nov 6 01:55:49.121: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-65a5d80e-c25f-410c-ac96-e90ce0ea7d14] Namespace:persistent-local-volumes-test-6452 PodName:hostexec-node1-ds2bk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:49.121: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:50.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6452" for this suite. • [SLOW TEST:21.081 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":9,"skipped":251,"failed":0} SSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket","total":-1,"completed":11,"skipped":508,"failed":0} [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:36.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 6 01:55:36.872: INFO: The status of Pod test-hostpath-type-6dm87 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:38.877: 
INFO: The status of Pod test-hostpath-type-6dm87 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:40.878: INFO: The status of Pod test-hostpath-type-6dm87 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:42.876: INFO: The status of Pod test-hostpath-type-6dm87 is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Nov 6 01:55:42.879: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-7414 PodName:test-hostpath-type-6dm87 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:55:42.879: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:55.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-7414" for this suite. • [SLOW TEST:18.211 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:346 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev","total":-1,"completed":12,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:21.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:55:25.277: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b-backend && mount --bind /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b-backend /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b-backend && ln -s /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b-backend /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b] Namespace:persistent-local-volumes-test-8094 PodName:hostexec-node2-jfc7c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:25.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local 
PVCs and PVs Nov 6 01:55:25.371: INFO: Creating a PV followed by a PVC Nov 6 01:55:25.376: INFO: Waiting for PV local-pvm42kt to bind to PVC pvc-khslr Nov 6 01:55:25.376: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-khslr] to have phase Bound Nov 6 01:55:25.378: INFO: PersistentVolumeClaim pvc-khslr found but phase is Pending instead of Bound. Nov 6 01:55:27.382: INFO: PersistentVolumeClaim pvc-khslr found but phase is Pending instead of Bound. Nov 6 01:55:29.386: INFO: PersistentVolumeClaim pvc-khslr found but phase is Pending instead of Bound. Nov 6 01:55:31.390: INFO: PersistentVolumeClaim pvc-khslr found but phase is Pending instead of Bound. Nov 6 01:55:33.394: INFO: PersistentVolumeClaim pvc-khslr found but phase is Pending instead of Bound. Nov 6 01:55:35.399: INFO: PersistentVolumeClaim pvc-khslr found but phase is Pending instead of Bound. Nov 6 01:55:37.401: INFO: PersistentVolumeClaim pvc-khslr found and phase=Bound (12.025024666s) Nov 6 01:55:37.401: INFO: Waiting up to 3m0s for PersistentVolume local-pvm42kt to have phase Bound Nov 6 01:55:37.403: INFO: PersistentVolume local-pvm42kt found and phase=Bound (1.955246ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 6 01:55:43.430: INFO: pod "pod-7dc69c33-ed8a-4398-82e0-2a6f23aaef26" created on Node "node2" STEP: Writing in pod1 Nov 6 01:55:43.430: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8094 PodName:pod-7dc69c33-ed8a-4398-82e0-2a6f23aaef26 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:55:43.430: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:43.551: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:55:43.551: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8094 PodName:pod-7dc69c33-ed8a-4398-82e0-2a6f23aaef26 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:55:43.551: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:43.675: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 6 01:55:55.709: INFO: pod "pod-f31bf1a8-a11f-4dea-b806-197d78fd751e" created on Node "node2" Nov 6 01:55:55.709: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8094 PodName:pod-f31bf1a8-a11f-4dea-b806-197d78fd751e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:55:55.709: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:55.796: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 6 01:55:55.796: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8094 PodName:pod-f31bf1a8-a11f-4dea-b806-197d78fd751e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Nov 6 01:55:55.796: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:55.892: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 6 01:55:55.892: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8094 PodName:pod-7dc69c33-ed8a-4398-82e0-2a6f23aaef26 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:55:55.892: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:55.992: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-7dc69c33-ed8a-4398-82e0-2a6f23aaef26 in namespace persistent-local-volumes-test-8094 STEP: Deleting pod2 STEP: Deleting pod pod-f31bf1a8-a11f-4dea-b806-197d78fd751e in namespace persistent-local-volumes-test-8094 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:55:56.001: INFO: Deleting PersistentVolumeClaim "pvc-khslr" Nov 6 01:55:56.005: INFO: Deleting PersistentVolume "local-pvm42kt" STEP: Removing the test directory Nov 6 01:55:56.008: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b && umount /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b-backend && rm -r /tmp/local-volume-test-ea7e0e8d-00c6-4ca4-b5db-a780ee7c4c7b-backend] Namespace:persistent-local-volumes-test-8094 PodName:hostexec-node2-jfc7c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:56.008: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:56.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8094" for this suite. 
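The spec above exercises two pods mounting the same local PV at once: it writes a file from pod1, reads it back from pod2, then writes from pod2 and reads from pod1. A hand-run equivalent of that cross-check with kubectl exec, using the pod and namespace names from the log and the suite's own test string (the pods are already deleted by this point in the run, so this is purely illustrative):

# Sketch: write through one pod's mount and read the same bytes through the other's.
ns=persistent-local-volumes-test-8094
pod1=pod-7dc69c33-ed8a-4398-82e0-2a6f23aaef26
pod2=pod-f31bf1a8-a11f-4dea-b806-197d78fd751e
kubectl -n "$ns" exec "$pod1" -- sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
kubectl -n "$ns" exec "$pod2" -- cat /mnt/volume1/test-file   # expect: test-file-content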
• [SLOW TEST:34.929 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":388,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:50:56.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411 STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:57.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9686" for this suite. • [SLOW TEST:300.059 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:411 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":3,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:57.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should test that deleting a claim before the volume is provisioned deletes the volume. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511 Nov 6 01:55:57.174: INFO: Only supported for providers [openstack gce aws gke vsphere azure] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:55:57.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-8075" for this suite. S [SKIPPING] [0.031 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 should test that deleting a claim before the volume is provisioned deletes the volume. [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:511 Only supported for providers [openstack gce aws gke vsphere azure] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:517 ------------------------------ SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:57.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75 STEP: Creating configMap with name projected-configmap-test-volume-0cad6fae-0534-436d-8404-fba0093dd7bf STEP: Creating a pod to test consume configMaps Nov 6 01:55:57.242: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e7a8949-d756-4608-b870-9eaeadb7c291" in namespace "projected-865" to be "Succeeded or Failed" Nov 6 01:55:57.245: INFO: Pod "pod-projected-configmaps-6e7a8949-d756-4608-b870-9eaeadb7c291": Phase="Pending", Reason="", readiness=false. Elapsed: 2.942181ms Nov 6 01:55:59.249: INFO: Pod "pod-projected-configmaps-6e7a8949-d756-4608-b870-9eaeadb7c291": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006509624s Nov 6 01:56:01.253: INFO: Pod "pod-projected-configmaps-6e7a8949-d756-4608-b870-9eaeadb7c291": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010360316s STEP: Saw pod success Nov 6 01:56:01.253: INFO: Pod "pod-projected-configmaps-6e7a8949-d756-4608-b870-9eaeadb7c291" satisfied condition "Succeeded or Failed" Nov 6 01:56:01.255: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-6e7a8949-d756-4608-b870-9eaeadb7c291 container agnhost-container: STEP: delete the pod Nov 6 01:56:01.269: INFO: Waiting for pod pod-projected-configmaps-6e7a8949-d756-4608-b870-9eaeadb7c291 to disappear Nov 6 01:56:01.271: INFO: Pod pod-projected-configmaps-6e7a8949-d756-4608-b870-9eaeadb7c291 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:01.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-865" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":126,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:33.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae" Nov 6 01:55:35.610: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae && dd if=/dev/zero of=/tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae/file] Namespace:persistent-local-volumes-test-9867 PodName:hostexec-node2-vb2qg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:35.610: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:35.821: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9867 PodName:hostexec-node2-vb2qg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:35.821: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:36.876: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae && chmod o+rwx /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae] Namespace:persistent-local-volumes-test-9867 PodName:hostexec-node2-vb2qg 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:36.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:55:37.568: INFO: Creating a PV followed by a PVC Nov 6 01:55:37.575: INFO: Waiting for PV local-pvm9wlw to bind to PVC pvc-w2ssn Nov 6 01:55:37.575: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-w2ssn] to have phase Bound Nov 6 01:55:37.577: INFO: PersistentVolumeClaim pvc-w2ssn found but phase is Pending instead of Bound. Nov 6 01:55:39.583: INFO: PersistentVolumeClaim pvc-w2ssn found and phase=Bound (2.008514322s) Nov 6 01:55:39.584: INFO: Waiting up to 3m0s for PersistentVolume local-pvm9wlw to have phase Bound Nov 6 01:55:39.586: INFO: PersistentVolume local-pvm9wlw found and phase=Bound (2.323282ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 6 01:55:43.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9867 exec pod-e703e2ce-fdf8-40e9-a963-b80fa1371bf9 --namespace=persistent-local-volumes-test-9867 -- stat -c %g /mnt/volume1' Nov 6 01:55:44.012: INFO: stderr: "" Nov 6 01:55:44.012: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 6 01:56:00.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9867 exec pod-3800f465-197e-4176-8091-4292cb1f0fd9 --namespace=persistent-local-volumes-test-9867 -- stat -c %g /mnt/volume1' Nov 6 01:56:00.321: INFO: stderr: "" Nov 6 01:56:00.321: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-e703e2ce-fdf8-40e9-a963-b80fa1371bf9 in namespace persistent-local-volumes-test-9867 STEP: Deleting second pod STEP: Deleting pod pod-3800f465-197e-4176-8091-4292cb1f0fd9 in namespace persistent-local-volumes-test-9867 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:56:00.330: INFO: Deleting PersistentVolumeClaim "pvc-w2ssn" Nov 6 01:56:00.334: INFO: Deleting PersistentVolume "local-pvm9wlw" Nov 6 01:56:00.338: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae] Namespace:persistent-local-volumes-test-9867 PodName:hostexec-node2-vb2qg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:00.338: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:00.496: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9867 PodName:hostexec-node2-vb2qg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:00.496: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae/file Nov 6 01:56:00.703: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9867 PodName:hostexec-node2-vb2qg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:00.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae Nov 6 01:56:01.104: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4ae44c31-47b7-4e1f-9451-26ecfbf846ae] Namespace:persistent-local-volumes-test-9867 PodName:hostexec-node2-vb2qg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:01.104: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:01.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9867" for this suite. • [SLOW TEST:27.781 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":2,"skipped":36,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:38.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f" Nov 6 01:55:42.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f && dd if=/dev/zero of=/tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f/file bs=4096 count=5120 && losetup -f 
/tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f/file] Namespace:persistent-local-volumes-test-7865 PodName:hostexec-node1-b8jfq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:42.291: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:42.414: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7865 PodName:hostexec-node1-b8jfq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:42.414: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:55:42.504: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop1 && mount -t ext4 /dev/loop1 /tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f && chmod o+rwx /tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f] Namespace:persistent-local-volumes-test-7865 PodName:hostexec-node1-b8jfq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:42.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:55:42.759: INFO: Creating a PV followed by a PVC Nov 6 01:55:42.765: INFO: Waiting for PV local-pvsd66p to bind to PVC pvc-5jkb9 Nov 6 01:55:42.765: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5jkb9] to have phase Bound Nov 6 01:55:42.767: INFO: PersistentVolumeClaim pvc-5jkb9 found but phase is Pending instead of Bound. Nov 6 01:55:44.772: INFO: PersistentVolumeClaim pvc-5jkb9 found but phase is Pending instead of Bound. Nov 6 01:55:46.776: INFO: PersistentVolumeClaim pvc-5jkb9 found but phase is Pending instead of Bound. Nov 6 01:55:48.779: INFO: PersistentVolumeClaim pvc-5jkb9 found but phase is Pending instead of Bound. Nov 6 01:55:50.783: INFO: PersistentVolumeClaim pvc-5jkb9 found but phase is Pending instead of Bound. 
Nov 6 01:55:52.787: INFO: PersistentVolumeClaim pvc-5jkb9 found and phase=Bound (10.021572351s) Nov 6 01:55:52.787: INFO: Waiting up to 3m0s for PersistentVolume local-pvsd66p to have phase Bound Nov 6 01:55:52.789: INFO: PersistentVolume local-pvsd66p found and phase=Bound (2.050007ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 6 01:56:00.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-7865 exec pod-2847cff0-1f75-4b8b-a243-69ce3e6e6a20 --namespace=persistent-local-volumes-test-7865 -- stat -c %g /mnt/volume1' Nov 6 01:56:01.119: INFO: stderr: "" Nov 6 01:56:01.120: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-2847cff0-1f75-4b8b-a243-69ce3e6e6a20 in namespace persistent-local-volumes-test-7865 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:56:01.127: INFO: Deleting PersistentVolumeClaim "pvc-5jkb9" Nov 6 01:56:01.131: INFO: Deleting PersistentVolume "local-pvsd66p" Nov 6 01:56:01.135: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f] Namespace:persistent-local-volumes-test-7865 PodName:hostexec-node1-b8jfq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:01.135: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:01.290: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-7865 PodName:hostexec-node1-b8jfq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:01.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node1" at path /tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f/file Nov 6 01:56:01.378: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-7865 PodName:hostexec-node1-b8jfq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:01.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f Nov 6 01:56:01.460: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ceaef6e1-5052-4bb1-b310-97b675760c6f] Namespace:persistent-local-volumes-test-7865 PodName:hostexec-node1-b8jfq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:01.460: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:01.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7865" for this suite. • [SLOW TEST:23.308 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":11,"skipped":496,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:50.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 6 01:55:50.624: INFO: The status of Pod test-hostpath-type-4v6h6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:52.628: INFO: The status of Pod test-hostpath-type-4v6h6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:54.629: INFO: The status of Pod test-hostpath-type-4v6h6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:56.628: INFO: The status of Pod test-hostpath-type-4v6h6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:55:58.628: INFO: The status of Pod test-hostpath-type-4v6h6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:00.629: INFO: The status of Pod test-hostpath-type-4v6h6 is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:02.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-2172" for this suite. 
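For reference, the "blockfswithformat" volume type exercised above is backed by a loop device that the suite sets up and tears down on the node through the hostexec pod. Condensed from the nsenter commands in the log, the lifecycle is roughly the following (the directory name is illustrative; the real run uses a random /tmp/local-volume-test-* path):

# setup: backing file -> loop device -> ext4 filesystem mounted at the test directory
DIR=/tmp/local-volume-test-example        # illustrative path
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120                 # ~20 MiB backing file
losetup -f "$DIR/file"                                            # attach to the first free loop device
LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')       # recover which /dev/loopN was used
mkfs -t ext4 "$LOOP_DEV"
mount -t ext4 "$LOOP_DEV" "$DIR"
chmod o+rwx "$DIR"

# teardown, mirroring the AfterEach steps above
umount "$DIR"
losetup -d "$LOOP_DEV"
rm -r "$DIR"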
• [SLOW TEST:12.079 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:221 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile","total":-1,"completed":10,"skipped":263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:01.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 6 01:56:01.341: INFO: The status of Pod test-hostpath-type-bbjz4 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:03.345: INFO: The status of Pod test-hostpath-type-bbjz4 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:05.345: INFO: The status of Pod test-hostpath-type-bbjz4 is Running (Ready = true) STEP: running on node node1 STEP: Create a block device for further testing Nov 6 01:56:05.348: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-1174 PodName:test-hostpath-type-bbjz4 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:56:05.348: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:07.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-1174" for this suite. 
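The block-device HostPathType cases prepare a device node inside the already-running test pod before exercising the type check; the positive half is just mknod, and the negative half mounts a path that does not exist with type HostPathBlockDev, which the kubelet rejects. The device-node part, copied from the logged command (major/minor 89 1 as used by the test):

mknod /mnt/test/ablkdev b 89 1        # create a block device node at the shared hostPath mount
test -b /mnt/test/ablkdev && echo "ablkdev is a block device"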
• [SLOW TEST:6.158 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:340 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev","total":-1,"completed":5,"skipped":135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:01.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 6 01:56:01.634: INFO: The status of Pod test-hostpath-type-sp4zt is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:03.637: INFO: The status of Pod test-hostpath-type-sp4zt is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:05.639: INFO: The status of Pod test-hostpath-type-sp4zt is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:13.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-9762" for this suite. 
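The directory cases hinge on the hostPath volume's type field: DirectoryOrCreate makes the kubelet create 'adir' on the node if it is missing, while leaving the type unset (HostPathUnset, the empty string) skips the check entirely. A minimal, hypothetical pod reproducing the DirectoryOrCreate half might look like this (name, image and path are illustrative, not taken from the run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-dir-or-create      # hypothetical name
spec:
  containers:
  - name: test
    image: busybox                  # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: adir
      mountPath: /mnt/test/adir
  volumes:
  - name: adir
    hostPath:
      path: /tmp/adir
      type: DirectoryOrCreate       # other values: Directory, FileOrCreate, File, Socket, CharDevice, BlockDevice
EOF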
• [SLOW TEST:12.092 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:80 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset","total":-1,"completed":12,"skipped":515,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:13.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 6 01:56:13.755: INFO: Waiting up to 5m0s for pod "pod-5d850261-7bec-424e-a7b7-889203838ee8" in namespace "emptydir-7058" to be "Succeeded or Failed" Nov 6 01:56:13.756: INFO: Pod "pod-5d850261-7bec-424e-a7b7-889203838ee8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.842815ms Nov 6 01:56:15.760: INFO: Pod "pod-5d850261-7bec-424e-a7b7-889203838ee8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005200355s Nov 6 01:56:17.764: INFO: Pod "pod-5d850261-7bec-424e-a7b7-889203838ee8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009155611s STEP: Saw pod success Nov 6 01:56:17.764: INFO: Pod "pod-5d850261-7bec-424e-a7b7-889203838ee8" satisfied condition "Succeeded or Failed" Nov 6 01:56:17.766: INFO: Trying to get logs from node node1 pod pod-5d850261-7bec-424e-a7b7-889203838ee8 container test-container: STEP: delete the pod Nov 6 01:56:17.778: INFO: Waiting for pod pod-5d850261-7bec-424e-a7b7-889203838ee8 to disappear Nov 6 01:56:17.780: INFO: Pod pod-5d850261-7bec-424e-a7b7-889203838ee8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:17.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7058" for this suite. 
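The FSGroup assertions in this suite follow one pattern: the pod is created with securityContext.fsGroup set (1234 in the local-volume cases above), and the test reads the group owner of the mount point back out of the running container. The check reduces to a single exec (pod and namespace below are the ones from the blockfswithformat run earlier; the expected output is 1234):

kubectl exec pod-2847cff0-1f75-4b8b-a243-69ce3e6e6a20 -n persistent-local-volumes-test-7865 -- stat -c %g /mnt/volume1

For the emptyDir case, the file written by the root container is expected to carry the pod's fsGroup as its group owner, which can be confirmed the same way with stat -c '%a %g' on the created file.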
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":13,"skipped":526,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:56.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:56:16.227: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-c3b2c0cc-e687-4c46-a40e-2ff4bbc8870d-backend && ln -s /tmp/local-volume-test-c3b2c0cc-e687-4c46-a40e-2ff4bbc8870d-backend /tmp/local-volume-test-c3b2c0cc-e687-4c46-a40e-2ff4bbc8870d] Namespace:persistent-local-volumes-test-1275 PodName:hostexec-node2-tx9tg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:16.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:56:16.330: INFO: Creating a PV followed by a PVC Nov 6 01:56:16.339: INFO: Waiting for PV local-pvh7prc to bind to PVC pvc-lnxdw Nov 6 01:56:16.339: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-lnxdw] to have phase Bound Nov 6 01:56:16.341: INFO: PersistentVolumeClaim pvc-lnxdw found but phase is Pending instead of Bound. Nov 6 01:56:18.343: INFO: PersistentVolumeClaim pvc-lnxdw found but phase is Pending instead of Bound. Nov 6 01:56:20.348: INFO: PersistentVolumeClaim pvc-lnxdw found but phase is Pending instead of Bound. 
Nov 6 01:56:22.352: INFO: PersistentVolumeClaim pvc-lnxdw found and phase=Bound (6.013410643s) Nov 6 01:56:22.352: INFO: Waiting up to 3m0s for PersistentVolume local-pvh7prc to have phase Bound Nov 6 01:56:22.355: INFO: PersistentVolume local-pvh7prc found and phase=Bound (2.490654ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 6 01:56:22.359: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:56:22.361: INFO: Deleting PersistentVolumeClaim "pvc-lnxdw" Nov 6 01:56:22.365: INFO: Deleting PersistentVolume "local-pvh7prc" STEP: Removing the test directory Nov 6 01:56:22.369: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c3b2c0cc-e687-4c46-a40e-2ff4bbc8870d && rm -r /tmp/local-volume-test-c3b2c0cc-e687-4c46-a40e-2ff4bbc8870d-backend] Namespace:persistent-local-volumes-test-1275 PodName:hostexec-node2-tx9tg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:22.369: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:22.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1275" for this suite. 
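The "dir-link" volume type is just a symlink to a backing directory, created on the node through the hostexec pod and removed again in AfterEach. In plain shell the lifecycle condenses to (path illustrative):

DIR=/tmp/local-volume-test-example       # illustrative path
mkdir "${DIR}-backend"
ln -s "${DIR}-backend" "$DIR"            # the local PV under test points at the symlink path
# ... PV/PVC created and bound, pods scheduled onto the node ...
rm -r "$DIR" && rm -r "${DIR}-backend"   # cleanup, as in the "Removing the test directory" step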
S [SKIPPING] [26.443 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:17.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 6 01:56:17.865: INFO: The status of Pod test-hostpath-type-5jvxf is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:19.868: INFO: The status of Pod test-hostpath-type-5jvxf is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:21.868: INFO: The status of Pod test-hostpath-type-5jvxf is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Nov 6 01:56:21.870: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-1824 PodName:test-hostpath-type-5jvxf ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:56:21.870: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277 [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:25.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-1824" for this suite. 
• [SLOW TEST:8.161 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:277 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev","total":-1,"completed":14,"skipped":541,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:22.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37 [It] should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 STEP: Creating a pod to test hostPath r/w Nov 6 01:56:22.658: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9558" to be "Succeeded or Failed" Nov 6 01:56:22.664: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.348274ms Nov 6 01:56:24.667: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008456133s Nov 6 01:56:26.669: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011168811s Nov 6 01:56:28.672: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014143913s STEP: Saw pod success Nov 6 01:56:28.672: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Nov 6 01:56:28.676: INFO: Trying to get logs from node node1 pod pod-host-path-test container test-container-2: STEP: delete the pod Nov 6 01:56:28.692: INFO: Waiting for pod pod-host-path-test to disappear Nov 6 01:56:28.694: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:28.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9558" for this suite. 
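Short-lived pods like pod-host-path-test above are polled until they report "Succeeded or Failed", and only then are the container logs fetched. Outside the framework the same wait can be approximated with a jsonpath poll (names and namespace taken from the run above, which has since been cleaned up):

NS=hostpath-9558
POD=pod-host-path-test
while true; do
  phase=$(kubectl get pod "$POD" -n "$NS" -o jsonpath='{.status.phase}')
  [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ] && break
  sleep 2
done
kubectl logs "$POD" -n "$NS" -c test-container-2     # the container the test reads back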
• [SLOW TEST:6.077 seconds] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support r/w [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":11,"skipped":400,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:46.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:55:56.178: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-620af0a2-0c79-45d2-9b19-3fb0f628020f-backend && ln -s /tmp/local-volume-test-620af0a2-0c79-45d2-9b19-3fb0f628020f-backend /tmp/local-volume-test-620af0a2-0c79-45d2-9b19-3fb0f628020f] Namespace:persistent-local-volumes-test-1878 PodName:hostexec-node2-qbtmg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:55:56.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:55:56.373: INFO: Creating a PV followed by a PVC Nov 6 01:55:56.380: INFO: Waiting for PV local-pvkc4w4 to bind to PVC pvc-fd4r8 Nov 6 01:55:56.380: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fd4r8] to have phase Bound Nov 6 01:55:56.382: INFO: PersistentVolumeClaim pvc-fd4r8 found but phase is Pending instead of Bound. Nov 6 01:55:58.386: INFO: PersistentVolumeClaim pvc-fd4r8 found but phase is Pending instead of Bound. Nov 6 01:56:00.390: INFO: PersistentVolumeClaim pvc-fd4r8 found but phase is Pending instead of Bound. Nov 6 01:56:02.393: INFO: PersistentVolumeClaim pvc-fd4r8 found but phase is Pending instead of Bound. Nov 6 01:56:04.396: INFO: PersistentVolumeClaim pvc-fd4r8 found but phase is Pending instead of Bound. Nov 6 01:56:06.399: INFO: PersistentVolumeClaim pvc-fd4r8 found but phase is Pending instead of Bound. 
Nov 6 01:56:08.403: INFO: PersistentVolumeClaim pvc-fd4r8 found and phase=Bound (12.022422779s) Nov 6 01:56:08.403: INFO: Waiting up to 3m0s for PersistentVolume local-pvkc4w4 to have phase Bound Nov 6 01:56:08.405: INFO: PersistentVolume local-pvkc4w4 found and phase=Bound (2.207896ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 6 01:56:22.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1878 exec pod-ece1dbfb-d656-41e0-8b49-435cc7c1164e --namespace=persistent-local-volumes-test-1878 -- stat -c %g /mnt/volume1' Nov 6 01:56:22.685: INFO: stderr: "" Nov 6 01:56:22.685: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 6 01:56:34.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-1878 exec pod-b6c4eea9-1961-49a8-ba52-01ee066e6523 --namespace=persistent-local-volumes-test-1878 -- stat -c %g /mnt/volume1' Nov 6 01:56:34.952: INFO: stderr: "" Nov 6 01:56:34.952: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-ece1dbfb-d656-41e0-8b49-435cc7c1164e in namespace persistent-local-volumes-test-1878 STEP: Deleting second pod STEP: Deleting pod pod-b6c4eea9-1961-49a8-ba52-01ee066e6523 in namespace persistent-local-volumes-test-1878 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:56:34.961: INFO: Deleting PersistentVolumeClaim "pvc-fd4r8" Nov 6 01:56:34.964: INFO: Deleting PersistentVolume "local-pvkc4w4" STEP: Removing the test directory Nov 6 01:56:34.968: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-620af0a2-0c79-45d2-9b19-3fb0f628020f && rm -r /tmp/local-volume-test-620af0a2-0c79-45d2-9b19-3fb0f628020f-backend] Namespace:persistent-local-volumes-test-1878 PodName:hostexec-node2-qbtmg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:34.968: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:35.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1878" for this suite. 
• [SLOW TEST:48.955 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":2,"skipped":15,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:28.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 6 01:56:28.810: INFO: The status of Pod test-hostpath-type-sv5vz is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:30.816: INFO: The status of Pod test-hostpath-type-sv5vz is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:32.814: INFO: The status of Pod test-hostpath-type-sv5vz is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:38.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-6525" for this suite. 
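The negative HostPathType cases ("Checking for HostPathType error event") never expect the pod to start; they wait for the kubelet to emit a mount-failure event complaining that the path is not of the requested type. The same event can be inspected by hand, for example (the pod name is a placeholder, and the exact message wording comes from the kubelet's hostPath type check, so it may vary by version):

NS=host-path-type-directory-6525
POD=test-hostpath-type-pod            # placeholder for the failing pod's name
kubectl get events -n "$NS" --field-selector involvedObject.name="$POD" | grep -i 'hostpath type check'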
• [SLOW TEST:10.124 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:89 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket","total":-1,"completed":12,"skipped":432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:02.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:56:18.813: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-548c71bf-7fc8-4126-bbc5-a04de0641305-backend && ln -s /tmp/local-volume-test-548c71bf-7fc8-4126-bbc5-a04de0641305-backend /tmp/local-volume-test-548c71bf-7fc8-4126-bbc5-a04de0641305] Namespace:persistent-local-volumes-test-4531 PodName:hostexec-node2-tqdnt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:18.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:56:18.900: INFO: Creating a PV followed by a PVC Nov 6 01:56:18.906: INFO: Waiting for PV local-pvpwhlp to bind to PVC pvc-b2sbj Nov 6 01:56:18.906: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-b2sbj] to have phase Bound Nov 6 01:56:18.908: INFO: PersistentVolumeClaim pvc-b2sbj found but phase is Pending instead of Bound. 
Nov 6 01:56:20.912: INFO: PersistentVolumeClaim pvc-b2sbj found and phase=Bound (2.005552267s) Nov 6 01:56:20.912: INFO: Waiting up to 3m0s for PersistentVolume local-pvpwhlp to have phase Bound Nov 6 01:56:20.914: INFO: PersistentVolume local-pvpwhlp found and phase=Bound (2.043647ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 6 01:56:32.939: INFO: pod "pod-c30e6723-17d1-4ebb-a0a2-5a56d2ffb877" created on Node "node2" STEP: Writing in pod1 Nov 6 01:56:32.939: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4531 PodName:pod-c30e6723-17d1-4ebb-a0a2-5a56d2ffb877 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:56:32.939: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:33.384: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:56:33.384: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4531 PodName:pod-c30e6723-17d1-4ebb-a0a2-5a56d2ffb877 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:56:33.384: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:33.596: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 6 01:56:51.623: INFO: pod "pod-2ff6ef95-610b-4ea8-9065-d5ddcd8d54bc" created on Node "node2" Nov 6 01:56:51.623: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4531 PodName:pod-2ff6ef95-610b-4ea8-9065-d5ddcd8d54bc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:56:51.623: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:51.719: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 6 01:56:51.719: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-548c71bf-7fc8-4126-bbc5-a04de0641305 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4531 PodName:pod-2ff6ef95-610b-4ea8-9065-d5ddcd8d54bc ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:56:51.719: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:51.828: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-548c71bf-7fc8-4126-bbc5-a04de0641305 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 6 01:56:51.828: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4531 PodName:pod-c30e6723-17d1-4ebb-a0a2-5a56d2ffb877 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:56:51.828: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:51.944: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-548c71bf-7fc8-4126-bbc5-a04de0641305", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-c30e6723-17d1-4ebb-a0a2-5a56d2ffb877 in 
namespace persistent-local-volumes-test-4531 STEP: Deleting pod2 STEP: Deleting pod pod-2ff6ef95-610b-4ea8-9065-d5ddcd8d54bc in namespace persistent-local-volumes-test-4531 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:56:51.955: INFO: Deleting PersistentVolumeClaim "pvc-b2sbj" Nov 6 01:56:51.959: INFO: Deleting PersistentVolume "local-pvpwhlp" STEP: Removing the test directory Nov 6 01:56:51.964: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-548c71bf-7fc8-4126-bbc5-a04de0641305 && rm -r /tmp/local-volume-test-548c71bf-7fc8-4126-bbc5-a04de0641305-backend] Namespace:persistent-local-volumes-test-4531 PodName:hostexec-node2-tqdnt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:51.964: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:52.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4531" for this suite. • [SLOW TEST:49.393 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":302,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:52.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Nov 6 01:56:52.179: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:52.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2608" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Nov 6 01:56:52.189: INFO: AfterEach: Cleaning up test resources Nov 6 01:56:52.189: INFO: pvc is nil Nov 6 01:56:52.189: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:156 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:52.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Nov 6 01:56:52.305: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:52.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-4148" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv4 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:78 should be mountable for NFSv4 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:79 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:07.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:56:19.621: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-53644806-3383-40bb-a672-a8f3079cd1bd-backend && mount --bind /tmp/local-volume-test-53644806-3383-40bb-a672-a8f3079cd1bd-backend /tmp/local-volume-test-53644806-3383-40bb-a672-a8f3079cd1bd-backend && ln -s /tmp/local-volume-test-53644806-3383-40bb-a672-a8f3079cd1bd-backend /tmp/local-volume-test-53644806-3383-40bb-a672-a8f3079cd1bd] Namespace:persistent-local-volumes-test-3468 PodName:hostexec-node2-fg2g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:19.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:56:19.728: INFO: Creating a PV followed by a PVC Nov 6 01:56:19.733: INFO: Waiting for PV local-pvxw2fz to bind to PVC pvc-w2ggs Nov 6 01:56:19.733: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-w2ggs] to have phase Bound Nov 6 01:56:19.736: INFO: PersistentVolumeClaim pvc-w2ggs found but phase is Pending instead of Bound. Nov 6 01:56:21.739: INFO: PersistentVolumeClaim pvc-w2ggs found but phase is Pending instead of Bound. 
Nov 6 01:56:23.744: INFO: PersistentVolumeClaim pvc-w2ggs found and phase=Bound (4.011207871s) Nov 6 01:56:23.744: INFO: Waiting up to 3m0s for PersistentVolume local-pvxw2fz to have phase Bound Nov 6 01:56:23.746: INFO: PersistentVolume local-pvxw2fz found and phase=Bound (1.860923ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 6 01:56:33.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-3468 exec pod-6b1354f0-610d-4f8e-83d4-be2534e2db67 --namespace=persistent-local-volumes-test-3468 -- stat -c %g /mnt/volume1' Nov 6 01:56:34.068: INFO: stderr: "" Nov 6 01:56:34.068: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 6 01:56:52.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-3468 exec pod-b24e2b86-b3c1-4e37-b12f-7934df76636d --namespace=persistent-local-volumes-test-3468 -- stat -c %g /mnt/volume1' Nov 6 01:56:52.578: INFO: stderr: "" Nov 6 01:56:52.578: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-6b1354f0-610d-4f8e-83d4-be2534e2db67 in namespace persistent-local-volumes-test-3468 STEP: Deleting second pod STEP: Deleting pod pod-b24e2b86-b3c1-4e37-b12f-7934df76636d in namespace persistent-local-volumes-test-3468 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:56:52.587: INFO: Deleting PersistentVolumeClaim "pvc-w2ggs" Nov 6 01:56:52.591: INFO: Deleting PersistentVolume "local-pvxw2fz" STEP: Removing the test directory Nov 6 01:56:52.595: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-53644806-3383-40bb-a672-a8f3079cd1bd && umount /tmp/local-volume-test-53644806-3383-40bb-a672-a8f3079cd1bd-backend && rm -r /tmp/local-volume-test-53644806-3383-40bb-a672-a8f3079cd1bd-backend] Namespace:persistent-local-volumes-test-3468 PodName:hostexec-node2-fg2g6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:52.595: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:52.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3468" for this suite. 
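"dir-link-bindmounted" adds one step on top of dir-link: the backing directory is bind-mounted onto itself before the symlink is created, so the cleanup has to remove the link, unmount, and only then delete the directory. Condensed from the nsenter commands above (path illustrative):

DIR=/tmp/local-volume-test-example       # illustrative path
mkdir "${DIR}-backend"
mount --bind "${DIR}-backend" "${DIR}-backend"
ln -s "${DIR}-backend" "$DIR"
# cleanup, in the reverse order used by the test
rm "$DIR"
umount "${DIR}-backend"
rm -r "${DIR}-backend"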
• [SLOW TEST:45.335 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":6,"skipped":185,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:51:56.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 STEP: Creating secret with name s-test-opt-create-6991566e-aed2-47b3-8653-8b33e43e345c STEP: Creating the pod [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:56.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9320" for this suite. 
• [SLOW TEST:300.059 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:421 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":2,"skipped":81,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:56.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 6 01:56:56.421: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:56:56.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1821" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:322 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:35.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 STEP: creating a Gluster DP server Pod STEP: locating the provisioner pod STEP: creating a StorageClass STEP: Creating a StorageClass STEP: creating a claim object with a suffix for gluster dynamic provisioner Nov 6 01:56:53.155: INFO: Warning: Making PVC: VolumeMode specified as 
invalid empty string, treating as nil STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- volume-provisioning-8891 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {} 2Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*volume-provisioning-8891-glusterdptestv9nph,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} Nov 6 01:56:53.159: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-tmj75] to have phase Bound Nov 6 01:56:53.162: INFO: PersistentVolumeClaim pvc-tmj75 found but phase is Pending instead of Bound. Nov 6 01:56:55.170: INFO: PersistentVolumeClaim pvc-tmj75 found and phase=Bound (2.01078703s) STEP: checking the claim STEP: checking the PV STEP: deleting claim "volume-provisioning-8891"/"pvc-tmj75" STEP: deleting the claim's PV "pvc-dba39978-acd0-401d-9537-73e20c80aee6" Nov 6 01:56:55.177: INFO: Waiting up to 20m0s for PersistentVolume pvc-dba39978-acd0-401d-9537-73e20c80aee6 to get deleted Nov 6 01:56:55.179: INFO: PersistentVolume pvc-dba39978-acd0-401d-9537-73e20c80aee6 found and phase=Bound (1.616144ms) Nov 6 01:57:00.183: INFO: PersistentVolume pvc-dba39978-acd0-401d-9537-73e20c80aee6 was removed Nov 6 01:57:00.184: INFO: deleting claim "volume-provisioning-8891"/"pvc-tmj75" Nov 6 01:57:00.186: INFO: deleting storage class volume-provisioning-8891-glusterdptestv9nph [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:00.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-8891" for this suite. 
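The claim check above is simply a poll on the PVC's phase until it reports Bound, which is also where the repeated "found but phase is Pending instead of Bound" lines come from. A minimal client-go sketch of that wait, assuming the kubeconfig path used by this run and reusing the namespace/claim names from the log only as placeholders (the suite uses its own framework wait helper, not this code):

// pvcwait.go — sketch of waiting for a PVC to reach phase Bound.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	// Poll every 2s, like the Pending/Bound loop in the log above.
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PVC %s phase=%s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Placeholder names taken from the log entries above.
	if err := waitForPVCBound(cs, "volume-provisioning-8891", "pvc-tmj75", 5*time.Minute); err != nil {
		panic(err)
	}
}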
• [SLOW TEST:25.091 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 GlusterDynamicProvisioner /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793 should create and delete persistent volumes [fast] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794 ------------------------------ {"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":3,"skipped":25,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:44.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-2850 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:55:44.545: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-attacher Nov 6 01:55:44.548: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2850 Nov 6 01:55:44.548: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2850 Nov 6 01:55:44.551: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2850 Nov 6 01:55:44.553: INFO: creating *v1.Role: csi-mock-volumes-2850-4200/external-attacher-cfg-csi-mock-volumes-2850 Nov 6 01:55:44.556: INFO: creating *v1.RoleBinding: csi-mock-volumes-2850-4200/csi-attacher-role-cfg Nov 6 01:55:44.558: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-provisioner Nov 6 01:55:44.561: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2850 Nov 6 01:55:44.561: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2850 Nov 6 01:55:44.564: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2850 Nov 6 01:55:44.567: INFO: creating *v1.Role: csi-mock-volumes-2850-4200/external-provisioner-cfg-csi-mock-volumes-2850 Nov 6 01:55:44.569: INFO: creating *v1.RoleBinding: csi-mock-volumes-2850-4200/csi-provisioner-role-cfg Nov 6 01:55:44.572: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-resizer Nov 6 01:55:44.574: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2850 Nov 6 01:55:44.574: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2850 Nov 6 01:55:44.577: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2850 Nov 6 01:55:44.580: INFO: creating *v1.Role: csi-mock-volumes-2850-4200/external-resizer-cfg-csi-mock-volumes-2850 Nov 6 01:55:44.583: INFO: creating *v1.RoleBinding: csi-mock-volumes-2850-4200/csi-resizer-role-cfg Nov 6 01:55:44.586: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-snapshotter Nov 6 01:55:44.588: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2850 
Nov 6 01:55:44.588: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2850 Nov 6 01:55:44.591: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2850 Nov 6 01:55:44.593: INFO: creating *v1.Role: csi-mock-volumes-2850-4200/external-snapshotter-leaderelection-csi-mock-volumes-2850 Nov 6 01:55:44.595: INFO: creating *v1.RoleBinding: csi-mock-volumes-2850-4200/external-snapshotter-leaderelection Nov 6 01:55:44.598: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-mock Nov 6 01:55:44.600: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2850 Nov 6 01:55:44.603: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2850 Nov 6 01:55:44.606: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2850 Nov 6 01:55:44.608: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2850 Nov 6 01:55:44.611: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2850 Nov 6 01:55:44.613: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2850 Nov 6 01:55:44.616: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2850 Nov 6 01:55:44.619: INFO: creating *v1.StatefulSet: csi-mock-volumes-2850-4200/csi-mockplugin Nov 6 01:55:44.623: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2850 Nov 6 01:55:44.625: INFO: creating *v1.StatefulSet: csi-mock-volumes-2850-4200/csi-mockplugin-attacher Nov 6 01:55:44.628: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2850" Nov 6 01:55:44.630: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2850 to register on node node2 STEP: Creating pod Nov 6 01:56:00.900: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:56:00.904: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-wfbw2] to have phase Bound Nov 6 01:56:00.907: INFO: PersistentVolumeClaim pvc-wfbw2 found but phase is Pending instead of Bound. 
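The spec running here ("should not be passed when podInfoOnMount=false") checks that when the mock driver's CSIDriver object sets podInfoOnMount to false, kubelet does not inject the csi.storage.k8s.io/pod.* entries into the volume context; the NodeUnpublishVolume summary just below accordingly shows an empty VolumeContext. A sketch of the CSIDriver object involved, using the storage/v1 types; the driver name is a hypothetical stand-in for the generated csi-mock-csi-mock-volumes-2850:

// csidriversketch.go — sketch of a CSIDriver with pod info disabled.
package csidriversketch

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createDriver(cs kubernetes.Interface) error {
	podInfo := false
	attach := true
	drv := &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-example"}, // hypothetical name
		Spec: storagev1.CSIDriverSpec{
			AttachRequired: &attach,
			// With PodInfoOnMount=false, kubelet omits pod name/namespace/UID
			// from the NodePublishVolume volume_context.
			PodInfoOnMount: &podInfo,
		},
	}
	_, err := cs.StorageV1().CSIDrivers().Create(context.TODO(), drv, metav1.CreateOptions{})
	return err
}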
Nov 6 01:56:02.913: INFO: PersistentVolumeClaim pvc-wfbw2 found and phase=Bound (2.008271378s) STEP: Deleting the previously created pod Nov 6 01:56:20.933: INFO: Deleting pod "pvc-volume-tester-r85ld" in namespace "csi-mock-volumes-2850" Nov 6 01:56:20.938: INFO: Wait up to 5m0s for pod "pvc-volume-tester-r85ld" to be fully deleted STEP: Checking CSI driver logs Nov 6 01:56:26.955: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/34181654-b408-40bc-87e6-cc7ce96396f3/volumes/kubernetes.io~csi/pvc-64aa483e-5509-464e-a06e-4ce2ebe522fe/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-r85ld Nov 6 01:56:26.955: INFO: Deleting pod "pvc-volume-tester-r85ld" in namespace "csi-mock-volumes-2850" STEP: Deleting claim pvc-wfbw2 Nov 6 01:56:26.964: INFO: Waiting up to 2m0s for PersistentVolume pvc-64aa483e-5509-464e-a06e-4ce2ebe522fe to get deleted Nov 6 01:56:26.966: INFO: PersistentVolume pvc-64aa483e-5509-464e-a06e-4ce2ebe522fe found and phase=Bound (2.111909ms) Nov 6 01:56:28.969: INFO: PersistentVolume pvc-64aa483e-5509-464e-a06e-4ce2ebe522fe was removed STEP: Deleting storageclass csi-mock-volumes-2850-sckx7sk STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2850 STEP: Waiting for namespaces [csi-mock-volumes-2850] to vanish STEP: uninstalling csi mock driver Nov 6 01:56:34.980: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-attacher Nov 6 01:56:34.983: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2850 Nov 6 01:56:34.986: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2850 Nov 6 01:56:34.989: INFO: deleting *v1.Role: csi-mock-volumes-2850-4200/external-attacher-cfg-csi-mock-volumes-2850 Nov 6 01:56:34.992: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2850-4200/csi-attacher-role-cfg Nov 6 01:56:34.996: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-provisioner Nov 6 01:56:34.999: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2850 Nov 6 01:56:35.003: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2850 Nov 6 01:56:35.006: INFO: deleting *v1.Role: csi-mock-volumes-2850-4200/external-provisioner-cfg-csi-mock-volumes-2850 Nov 6 01:56:35.010: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2850-4200/csi-provisioner-role-cfg Nov 6 01:56:35.013: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-resizer Nov 6 01:56:35.017: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2850 Nov 6 01:56:35.021: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2850 Nov 6 01:56:35.024: INFO: deleting *v1.Role: csi-mock-volumes-2850-4200/external-resizer-cfg-csi-mock-volumes-2850 Nov 6 01:56:35.027: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2850-4200/csi-resizer-role-cfg Nov 6 01:56:35.030: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-snapshotter Nov 6 01:56:35.033: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2850 Nov 6 01:56:35.036: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2850 Nov 6 01:56:35.039: INFO: deleting *v1.Role: csi-mock-volumes-2850-4200/external-snapshotter-leaderelection-csi-mock-volumes-2850 Nov 6 01:56:35.042: INFO: deleting 
*v1.RoleBinding: csi-mock-volumes-2850-4200/external-snapshotter-leaderelection Nov 6 01:56:35.046: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2850-4200/csi-mock Nov 6 01:56:35.049: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2850 Nov 6 01:56:35.052: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2850 Nov 6 01:56:35.055: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2850 Nov 6 01:56:35.058: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2850 Nov 6 01:56:35.061: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2850 Nov 6 01:56:35.065: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2850 Nov 6 01:56:35.068: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2850 Nov 6 01:56:35.071: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2850-4200/csi-mockplugin Nov 6 01:56:35.075: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-2850 Nov 6 01:56:35.078: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2850-4200/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-2850-4200 STEP: Waiting for namespaces [csi-mock-volumes-2850-4200] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:03.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:78.611 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when podInfoOnMount=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":5,"skipped":260,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:03.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-limits-on-node STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35 Nov 6 01:57:03.138: INFO: Only supported for providers [aws gce gke] (not local) [AfterEach] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:03.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-limits-on-node-2818" for this suite. 
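Many of the [SKIPPING] blocks in this run, including the Volume limits one above, bail out in BeforeEach because the cluster provider is "local" rather than gce/gke/aws. A minimal illustration of that gating logic; providerIs-style helpers here are hypothetical stand-ins for the e2e framework's own provider field and skipper:

// providersketch.go — illustration of provider-gated skips.
package providersketch

import "fmt"

// skipUnlessProviderIs returns nil when the current provider is supported;
// the real suite raises a Ginkgo skip instead of returning an error.
func skipUnlessProviderIs(current string, supported ...string) error {
	for _, p := range supported {
		if current == p {
			return nil // supported: run the spec
		}
	}
	return fmt.Errorf("Only supported for providers %v (not %s)", supported, current)
}

// Example: skipUnlessProviderIs("local", "gce", "gke", "aws") yields the
// skip message seen throughout this run, so the It block never executes.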
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Volume limits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should verify that all nodes have volume limits [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41 Only supported for providers [aws gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:36 ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:03.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Nov 6 01:57:03.183: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:03.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-5881" for this suite. S [SKIPPING] [0.042 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:459 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:56.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 6 01:56:56.600: INFO: The status of Pod test-hostpath-type-l9zs7 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:56:58.603: INFO: The status of Pod test-hostpath-type-l9zs7 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:00.605: INFO: The status of Pod test-hostpath-type-l9zs7 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:02.603: INFO: The status of Pod test-hostpath-type-l9zs7 is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Nov 6 01:57:02.606: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-1028 
PodName:test-hostpath-type-l9zs7 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:02.606: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:04.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-1028" for this suite. • [SLOW TEST:8.220 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:295 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:01.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should be plumbed down when csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 STEP: Building a driver namespace object, basename csi-mock-volumes-9842 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:56:01.446: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-attacher Nov 6 01:56:01.450: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9842 Nov 6 01:56:01.450: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9842 Nov 6 01:56:01.453: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9842 Nov 6 01:56:01.455: INFO: creating *v1.Role: csi-mock-volumes-9842-1143/external-attacher-cfg-csi-mock-volumes-9842 Nov 6 01:56:01.458: INFO: creating *v1.RoleBinding: csi-mock-volumes-9842-1143/csi-attacher-role-cfg Nov 6 01:56:01.461: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-provisioner Nov 6 01:56:01.465: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9842 Nov 6 01:56:01.465: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9842 Nov 6 01:56:01.468: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9842 Nov 6 01:56:01.470: INFO: creating *v1.Role: csi-mock-volumes-9842-1143/external-provisioner-cfg-csi-mock-volumes-9842 Nov 6 01:56:01.473: INFO: creating *v1.RoleBinding: csi-mock-volumes-9842-1143/csi-provisioner-role-cfg Nov 6 01:56:01.475: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-resizer Nov 6 01:56:01.477: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9842 Nov 6 01:56:01.478: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9842 Nov 6 01:56:01.483: INFO: creating 
*v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9842 Nov 6 01:56:01.485: INFO: creating *v1.Role: csi-mock-volumes-9842-1143/external-resizer-cfg-csi-mock-volumes-9842 Nov 6 01:56:01.489: INFO: creating *v1.RoleBinding: csi-mock-volumes-9842-1143/csi-resizer-role-cfg Nov 6 01:56:01.492: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-snapshotter Nov 6 01:56:01.494: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9842 Nov 6 01:56:01.495: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9842 Nov 6 01:56:01.497: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9842 Nov 6 01:56:01.500: INFO: creating *v1.Role: csi-mock-volumes-9842-1143/external-snapshotter-leaderelection-csi-mock-volumes-9842 Nov 6 01:56:01.502: INFO: creating *v1.RoleBinding: csi-mock-volumes-9842-1143/external-snapshotter-leaderelection Nov 6 01:56:01.505: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-mock Nov 6 01:56:01.506: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9842 Nov 6 01:56:01.509: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9842 Nov 6 01:56:01.511: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9842 Nov 6 01:56:01.514: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9842 Nov 6 01:56:01.516: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9842 Nov 6 01:56:01.518: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9842 Nov 6 01:56:01.520: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9842 Nov 6 01:56:01.523: INFO: creating *v1.StatefulSet: csi-mock-volumes-9842-1143/csi-mockplugin Nov 6 01:56:01.527: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9842 Nov 6 01:56:01.531: INFO: creating *v1.StatefulSet: csi-mock-volumes-9842-1143/csi-mockplugin-attacher Nov 6 01:56:01.534: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9842" Nov 6 01:56:01.537: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9842 to register on node node2 STEP: Creating pod Nov 6 01:56:17.807: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:56:17.812: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-c5bf5] to have phase Bound Nov 6 01:56:17.814: INFO: PersistentVolumeClaim pvc-c5bf5 found but phase is Pending instead of Bound. 
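This spec ("token should be plumbed down when csiServiceAccountTokenEnabled=true") verifies that kubelet hands the pod's service-account token to the driver, which surfaces further down as the csi.storage.k8s.io/serviceAccount.tokens volume attribute in the driver-log check. A sketch of how a CSIDriver opts into that plumbing, assuming the beta TokenRequests/RequiresRepublish fields exposed by the storage/v1 API of this release; treat field usage as illustrative rather than authoritative:

// tokensketch.go — sketch of a CSIDriver requesting service-account tokens.
package tokensketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func tokenPlumbingDriver() *storagev1.CSIDriver {
	exp := int64(600) // requested token lifetime in seconds; the token below carries a ~10 min expiry
	republish := true
	return &storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-example"}, // hypothetical name
		Spec: storagev1.CSIDriverSpec{
			// Kubelet delivers the tokens in NodePublishVolume's volume_context
			// under the csi.storage.k8s.io/serviceAccount.tokens key.
			TokenRequests:     []storagev1.TokenRequest{{Audience: "", ExpirationSeconds: &exp}},
			RequiresRepublish: &republish,
		},
	}
}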
Nov 6 01:56:19.819: INFO: PersistentVolumeClaim pvc-c5bf5 found and phase=Bound (2.007160504s) STEP: Deleting the previously created pod Nov 6 01:56:44.841: INFO: Deleting pod "pvc-volume-tester-vq6r8" in namespace "csi-mock-volumes-9842" Nov 6 01:56:44.847: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vq6r8" to be fully deleted STEP: Checking CSI driver logs Nov 6 01:56:50.886: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6Im8yajhoVHJFVGRzV2FwbVI4UHRqeXR6Q2llYi1aMllhSXBmWWpfN2E0OU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjM2MTY0MzkxLCJpYXQiOjE2MzYxNjM3OTEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTk4NDIiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLXZxNnI4IiwidWlkIjoiYjA0MzI0ZjktZmFjYy00NGQ0LWEzNGMtMzU2N2U1ZDY1NGJmIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiM2E1N2UzNTktNzMyNy00OTU2LTg1ZjItY2QyZWU0OTQ0Njc2In19LCJuYmYiOjE2MzYxNjM3OTEsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTk4NDI6ZGVmYXVsdCJ9.bDN0kUyJPcMaDF0SEdLPOA19xsqaG6vdiK64WMFefq1K2on9oDz8X2oYyIhnSjclDZbDrp9zWsOyzU7_JTvrfeb2GVUjONJUxbG1he1lXOx3ygAmjrrkndf_ULWPOJtV8MM8TQa4PLRRJ7Mehb7urDgECFH08MDWUUIQPG6mW-o9bidShiBjn4lQJnmwh97ARpgU3boBiOJvH0I7gPjUC8Qr5N9W8c9GxvtX2_2uWXhI0lHgivIalUa1EA_3_xjFy7Oe2nlWLZ8dxiMKtfkl1VU9GsNGHmCjLVuy9yKtRFlI5-uHbmUCuSWB9sZATk_VGTZ3R_NH0p7Tc2ckf2r9JQ","expirationTimestamp":"2021-11-06T02:06:31Z"}} Nov 6 01:56:50.886: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/b04324f9-facc-44d4-a34c-3567e5d654bf/volumes/kubernetes.io~csi/pvc-cd711fb1-23bb-46eb-b79e-e79a4f70c615/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-vq6r8 Nov 6 01:56:50.887: INFO: Deleting pod "pvc-volume-tester-vq6r8" in namespace "csi-mock-volumes-9842" STEP: Deleting claim pvc-c5bf5 Nov 6 01:56:50.898: INFO: Waiting up to 2m0s for PersistentVolume pvc-cd711fb1-23bb-46eb-b79e-e79a4f70c615 to get deleted Nov 6 01:56:50.901: INFO: PersistentVolume pvc-cd711fb1-23bb-46eb-b79e-e79a4f70c615 found and phase=Bound (2.674364ms) Nov 6 01:56:52.905: INFO: PersistentVolume pvc-cd711fb1-23bb-46eb-b79e-e79a4f70c615 was removed STEP: Deleting storageclass csi-mock-volumes-9842-scr7l6z STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9842 STEP: Waiting for namespaces [csi-mock-volumes-9842] to vanish STEP: uninstalling csi mock driver Nov 6 01:56:58.944: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-attacher Nov 6 01:56:58.947: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9842 Nov 6 01:56:58.950: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9842 Nov 6 01:56:58.954: INFO: deleting *v1.Role: csi-mock-volumes-9842-1143/external-attacher-cfg-csi-mock-volumes-9842 Nov 6 01:56:58.957: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9842-1143/csi-attacher-role-cfg Nov 6 01:56:58.961: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-provisioner Nov 6 01:56:58.964: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9842 Nov 6 01:56:58.970: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9842 Nov 6 01:56:58.976: INFO: 
deleting *v1.Role: csi-mock-volumes-9842-1143/external-provisioner-cfg-csi-mock-volumes-9842 Nov 6 01:56:58.984: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9842-1143/csi-provisioner-role-cfg Nov 6 01:56:58.992: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-resizer Nov 6 01:56:58.997: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9842 Nov 6 01:56:59.000: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9842 Nov 6 01:56:59.004: INFO: deleting *v1.Role: csi-mock-volumes-9842-1143/external-resizer-cfg-csi-mock-volumes-9842 Nov 6 01:56:59.007: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9842-1143/csi-resizer-role-cfg Nov 6 01:56:59.011: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-snapshotter Nov 6 01:56:59.014: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9842 Nov 6 01:56:59.017: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9842 Nov 6 01:56:59.020: INFO: deleting *v1.Role: csi-mock-volumes-9842-1143/external-snapshotter-leaderelection-csi-mock-volumes-9842 Nov 6 01:56:59.023: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9842-1143/external-snapshotter-leaderelection Nov 6 01:56:59.027: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9842-1143/csi-mock Nov 6 01:56:59.030: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9842 Nov 6 01:56:59.033: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9842 Nov 6 01:56:59.036: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9842 Nov 6 01:56:59.039: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9842 Nov 6 01:56:59.043: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9842 Nov 6 01:56:59.046: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9842 Nov 6 01:56:59.049: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9842 Nov 6 01:56:59.053: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9842-1143/csi-mockplugin Nov 6 01:56:59.056: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9842 Nov 6 01:56:59.059: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9842-1143/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-9842-1143 STEP: Waiting for namespaces [csi-mock-volumes-9842-1143] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:05.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:63.709 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374 token should be plumbed down when csiServiceAccountTokenEnabled=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":3,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] 
[sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:00.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 6 01:57:06.357: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7408 PodName:hostexec-node2-ndszr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:06.357: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:07.399: INFO: exec node2: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 6 01:57:07.399: INFO: exec node2: stdout: "0\n" Nov 6 01:57:07.399: INFO: exec node2: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 6 01:57:07.399: INFO: exec node2: exit code: 0 Nov 6 01:57:07.400: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:07.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7408" for this suite. 
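The "ExecWithOptions" entries above (here, the nsenter command counting local SSDs on node2) run a command inside a hostexec pod through the exec subresource. A minimal client-go sketch of that mechanism; pod, namespace, and container names are hypothetical, and the framework's own helper adds options this sketch omits:

// podexec.go — sketch of exec-in-pod via the Kubernetes exec subresource.
package main

import (
	"bytes"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func execInPod(cfg *rest.Config, cs kubernetes.Interface, ns, pod, container string, cmd []string) (string, string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&v1.PodExecOptions{
			Container: container,
			Command:   cmd,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	executor, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	err = executor.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	out, errOut, err := execInPod(cfg, cs, "default", "hostexec-example", "agnhost-container",
		[]string{"/bin/sh", "-c", "ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l"})
	fmt.Println("stdout:", out, "stderr:", errOut, "err:", err)
}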
S [SKIPPING] in Spec Setup (BeforeEach) [7.107 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ S ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:03.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 6 01:57:03.264: INFO: The status of Pod test-hostpath-type-v2g48 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:05.268: INFO: The status of Pod test-hostpath-type-v2g48 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:07.269: INFO: The status of Pod test-hostpath-type-v2g48 is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:09.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-2770" for this suite. 
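The HostPathType specs above (character device mounted as HostPathSocket, socket mounted as HostPathDirectory) all exercise the same mechanism: a hostPath volume declares an explicit Type, and kubelet rejects the mount with a HostPathType error event when the object at Path does not match. A sketch of the volume shape involved, with hypothetical names; only the corev1 types and constants are real:

// hostpathsketch.go — sketch of a typed hostPath volume.
package hostpathsketch

import v1 "k8s.io/api/core/v1"

func socketHostPathVolume(path string) v1.Volume {
	t := v1.HostPathSocket // other values include HostPathDirectory, HostPathCharDev, HostPathBlockDev
	return v1.Volume{
		Name: "host-socket",
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{
				// If path points at, say, a character device rather than a socket,
				// kubelet refuses the mount and the pod reports a HostPathType event.
				Path: path,
				Type: &t,
			},
		},
	}
}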
• [SLOW TEST:6.082 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:216 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory","total":-1,"completed":6,"skipped":284,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:52.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-227a2481-04b2-4592-baed-2e6d0a110e53" Nov 6 01:56:56.966: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-227a2481-04b2-4592-baed-2e6d0a110e53 && dd if=/dev/zero of=/tmp/local-volume-test-227a2481-04b2-4592-baed-2e6d0a110e53/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-227a2481-04b2-4592-baed-2e6d0a110e53/file] Namespace:persistent-local-volumes-test-8942 PodName:hostexec-node1-btdbc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:56.966: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:57.294: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-227a2481-04b2-4592-baed-2e6d0a110e53/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8942 PodName:hostexec-node1-btdbc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:56:57.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:56:57.543: INFO: Creating a PV followed by a PVC Nov 6 01:56:57.550: INFO: Waiting for PV local-pvpzpkb to bind to PVC pvc-nrvw6 Nov 6 01:56:57.550: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-nrvw6] to have phase Bound Nov 6 01:56:57.552: INFO: PersistentVolumeClaim pvc-nrvw6 found but phase is Pending instead of Bound. Nov 6 01:56:59.556: INFO: PersistentVolumeClaim pvc-nrvw6 found but phase is Pending instead of Bound. Nov 6 01:57:01.560: INFO: PersistentVolumeClaim pvc-nrvw6 found but phase is Pending instead of Bound. Nov 6 01:57:03.563: INFO: PersistentVolumeClaim pvc-nrvw6 found but phase is Pending instead of Bound. Nov 6 01:57:05.567: INFO: PersistentVolumeClaim pvc-nrvw6 found but phase is Pending instead of Bound. 
Nov 6 01:57:07.570: INFO: PersistentVolumeClaim pvc-nrvw6 found and phase=Bound (10.019693846s) Nov 6 01:57:07.570: INFO: Waiting up to 3m0s for PersistentVolume local-pvpzpkb to have phase Bound Nov 6 01:57:07.572: INFO: PersistentVolume local-pvpzpkb found and phase=Bound (2.109612ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:57:11.600: INFO: pod "pod-62187dd1-3ff3-4f48-870e-01e76667c008" created on Node "node1" STEP: Writing in pod1 Nov 6 01:57:11.600: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8942 PodName:pod-62187dd1-3ff3-4f48-870e-01e76667c008 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:11.600: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:11.697: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000168 seconds, 104.6KB/s", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 6 01:57:11.697: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-8942 PodName:pod-62187dd1-3ff3-4f48-870e-01e76667c008 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:11.697: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:11.782: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Writing in pod1 Nov 6 01:57:11.782: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8942 PodName:pod-62187dd1-3ff3-4f48-870e-01e76667c008 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:11.782: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:11.860: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo /dev/loop0 > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n11 bytes (11B) copied, 0.000042 seconds, 255.8KB/s", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod 
pod-62187dd1-3ff3-4f48-870e-01e76667c008 in namespace persistent-local-volumes-test-8942 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:57:11.865: INFO: Deleting PersistentVolumeClaim "pvc-nrvw6" Nov 6 01:57:11.868: INFO: Deleting PersistentVolume "local-pvpzpkb" Nov 6 01:57:11.873: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-227a2481-04b2-4592-baed-2e6d0a110e53/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8942 PodName:hostexec-node1-btdbc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:11.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-227a2481-04b2-4592-baed-2e6d0a110e53/file Nov 6 01:57:11.963: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-8942 PodName:hostexec-node1-btdbc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:11.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-227a2481-04b2-4592-baed-2e6d0a110e53 Nov 6 01:57:12.044: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-227a2481-04b2-4592-baed-2e6d0a110e53] Namespace:persistent-local-volumes-test-8942 PodName:hostexec-node1-btdbc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:12.044: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:12.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8942" for this suite. 
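The "[Volume type: block]" test above backs a local PersistentVolume with the loop device it created via dd and losetup, binds it as a raw block volume, writes through /mnt/volume1 from the pod, and verifies the data with hexdump before detaching the loop device. A sketch of the kind of PV object that flow binds, with hypothetical names and a capacity matching the 20 MiB backing file; the real test generates these objects itself:

// localpvsketch.go — sketch of a local, block-mode PersistentVolume.
package localpvsketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func localBlockPV(nodeName, devicePath string) *v1.PersistentVolume {
	block := v1.PersistentVolumeBlock
	return &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-example"},
		Spec: v1.PersistentVolumeSpec{
			Capacity:    v1.ResourceList{v1.ResourceStorage: resource.MustParse("20Mi")},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			VolumeMode:  &block, // raw block device, no filesystem
			PersistentVolumeSource: v1.PersistentVolumeSource{
				Local: &v1.LocalVolumeSource{Path: devicePath}, // e.g. "/dev/loop0" from losetup
			},
			// Local volumes must be pinned to the node that owns the device.
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}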
• [SLOW TEST:19.221 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:45.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-5039 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 6 01:55:45.605: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-attacher Nov 6 01:55:45.608: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5039 Nov 6 01:55:45.608: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5039 Nov 6 01:55:45.610: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5039 Nov 6 01:55:45.614: INFO: creating *v1.Role: csi-mock-volumes-5039-8178/external-attacher-cfg-csi-mock-volumes-5039 Nov 6 01:55:45.616: INFO: creating *v1.RoleBinding: csi-mock-volumes-5039-8178/csi-attacher-role-cfg Nov 6 01:55:45.619: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-provisioner Nov 6 01:55:45.622: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5039 Nov 6 01:55:45.622: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5039 Nov 6 01:55:45.625: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5039 Nov 6 01:55:45.627: INFO: creating *v1.Role: csi-mock-volumes-5039-8178/external-provisioner-cfg-csi-mock-volumes-5039 Nov 6 01:55:45.630: INFO: creating *v1.RoleBinding: csi-mock-volumes-5039-8178/csi-provisioner-role-cfg Nov 6 01:55:45.632: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-resizer Nov 6 01:55:45.635: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5039 Nov 6 01:55:45.635: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5039 Nov 6 01:55:45.638: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5039 Nov 6 01:55:45.640: INFO: creating *v1.Role: csi-mock-volumes-5039-8178/external-resizer-cfg-csi-mock-volumes-5039 Nov 6 01:55:45.643: INFO: creating *v1.RoleBinding: csi-mock-volumes-5039-8178/csi-resizer-role-cfg Nov 6 01:55:45.646: INFO: 
creating *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-snapshotter Nov 6 01:55:45.648: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5039 Nov 6 01:55:45.648: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5039 Nov 6 01:55:45.651: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5039 Nov 6 01:55:45.654: INFO: creating *v1.Role: csi-mock-volumes-5039-8178/external-snapshotter-leaderelection-csi-mock-volumes-5039 Nov 6 01:55:45.657: INFO: creating *v1.RoleBinding: csi-mock-volumes-5039-8178/external-snapshotter-leaderelection Nov 6 01:55:45.662: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-mock Nov 6 01:55:45.665: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5039 Nov 6 01:55:45.668: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5039 Nov 6 01:55:45.670: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5039 Nov 6 01:55:45.673: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5039 Nov 6 01:55:45.676: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5039 Nov 6 01:55:45.678: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5039 Nov 6 01:55:45.681: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5039 Nov 6 01:55:45.683: INFO: creating *v1.StatefulSet: csi-mock-volumes-5039-8178/csi-mockplugin Nov 6 01:55:45.688: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5039 Nov 6 01:55:45.691: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5039" Nov 6 01:55:45.693: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5039 to register on node node2 I1106 01:56:01.124419 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5039","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:56:01.229162 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1106 01:56:01.282997 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5039","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:56:01.294082 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1106 01:56:01.297736 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1106 01:56:01.404219 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5039"},"Error":"","FullError":null} STEP: Creating pod Nov 6 
01:56:12.096: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I1106 01:56:12.125801 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1106 01:56:12.131904 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b"}}},"Error":"","FullError":null} I1106 01:56:14.871113 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:56:14.873: INFO: >>> kubeConfig: /root/.kube/config I1106 01:56:14.982341 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b","storage.kubernetes.io/csiProvisionerIdentity":"1636163761295-8081-csi-mock-csi-mock-volumes-5039"}},"Response":{},"Error":"","FullError":null} I1106 01:56:15.745228 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:56:15.747: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:15.882: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:56:16.013: INFO: >>> kubeConfig: /root/.kube/config I1106 01:56:16.130900 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b/globalmount","target_path":"/var/lib/kubelet/pods/03bb07e9-7607-4e4a-b2d8-f339904f21e1/volumes/kubernetes.io~csi/pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b","storage.kubernetes.io/csiProvisionerIdentity":"1636163761295-8081-csi-mock-csi-mock-volumes-5039"}},"Response":{},"Error":"","FullError":null} I1106 01:56:19.121947 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:56:19.123956 29 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/03bb07e9-7607-4e4a-b2d8-f339904f21e1/volumes/kubernetes.io~csi/pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} Nov 6 01:56:22.116: INFO: Deleting pod "pvc-volume-tester-dlp46" in namespace "csi-mock-volumes-5039" Nov 6 01:56:22.121: INFO: Wait up to 5m0s for pod "pvc-volume-tester-dlp46" to be fully deleted Nov 6 01:56:25.655: INFO: >>> kubeConfig: /root/.kube/config I1106 01:56:25.789387 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/03bb07e9-7607-4e4a-b2d8-f339904f21e1/volumes/kubernetes.io~csi/pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b/mount"},"Response":{},"Error":"","FullError":null} I1106 01:56:25.858512 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:56:25.866590 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b/globalmount"},"Response":{},"Error":"","FullError":null} I1106 01:56:28.142126 29 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Nov 6 01:56:29.130: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zbdm8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5039", SelfLink:"", UID:"3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", ResourceVersion:"103752", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760572, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0041ac9c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041ac9d8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004770fa0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004770fb0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:56:29.131: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zbdm8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5039", SelfLink:"", UID:"3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", ResourceVersion:"103755", Generation:0, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760572, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0041aca50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041aca68)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0041aca80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041aca98)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004770fe0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004770ff0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:56:29.131: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zbdm8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5039", SelfLink:"", UID:"3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", ResourceVersion:"103756", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760572, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5039", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039b3e60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039b3e78)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039b3e90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039b3ea8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039b3ec0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039b3ed8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00480b8f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00480b900), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), 
Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:56:29.131: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zbdm8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5039", SelfLink:"", UID:"3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", ResourceVersion:"103764", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760572, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5039", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afa8d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afa8e8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afa900), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afa918)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afa930), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afa948)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", StorageClassName:(*string)(0xc003f6c610), VolumeMode:(*v1.PersistentVolumeMode)(0xc003f6c620), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:56:29.131: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zbdm8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5039", SelfLink:"", UID:"3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", ResourceVersion:"103765", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760572, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5039", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afa978), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afa990)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afa9a8), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc003afa9c0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afa9d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afa9f0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", StorageClassName:(*string)(0xc003f6c650), VolumeMode:(*v1.PersistentVolumeMode)(0xc003f6c660), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:56:29.131: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zbdm8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5039", SelfLink:"", UID:"3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", ResourceVersion:"104147", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760572, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc003afaa20), DeletionGracePeriodSeconds:(*int64)(0xc00369b168), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5039", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afaa38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afaa50)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afaa68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afaa80)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003afaa98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003afaab0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", StorageClassName:(*string)(0xc003f6c6a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc003f6c6b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 
01:56:29.131: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-zbdm8", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5039", SelfLink:"", UID:"3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", ResourceVersion:"104148", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760572, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc0044be720), DeletionGracePeriodSeconds:(*int64)(0xc004806a68), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5039", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0044be738), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0044be750)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0044be768), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0044be780)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0044be798), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0044be7b0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-3b52ef19-90a6-4f6b-acf1-8d641a08ed9b", StorageClassName:(*string)(0xc0048846f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004884700), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-dlp46 Nov 6 01:56:29.132: INFO: Deleting pod "pvc-volume-tester-dlp46" in namespace "csi-mock-volumes-5039" STEP: Deleting claim pvc-zbdm8 STEP: Deleting storageclass csi-mock-volumes-5039-scq962s STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5039 STEP: Waiting for namespaces [csi-mock-volumes-5039] to vanish STEP: uninstalling csi mock driver Nov 6 01:56:35.168: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-attacher Nov 6 01:56:35.175: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5039 Nov 6 01:56:35.183: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5039 Nov 6 01:56:35.188: INFO: deleting *v1.Role: csi-mock-volumes-5039-8178/external-attacher-cfg-csi-mock-volumes-5039 Nov 6 01:56:35.191: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5039-8178/csi-attacher-role-cfg Nov 6 01:56:35.195: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-provisioner Nov 6 01:56:35.198: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5039 Nov 6 01:56:35.201: INFO: 
deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5039 Nov 6 01:56:35.204: INFO: deleting *v1.Role: csi-mock-volumes-5039-8178/external-provisioner-cfg-csi-mock-volumes-5039 Nov 6 01:56:35.207: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5039-8178/csi-provisioner-role-cfg Nov 6 01:56:35.211: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-resizer Nov 6 01:56:35.215: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5039 Nov 6 01:56:35.218: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5039 Nov 6 01:56:35.221: INFO: deleting *v1.Role: csi-mock-volumes-5039-8178/external-resizer-cfg-csi-mock-volumes-5039 Nov 6 01:56:35.224: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5039-8178/csi-resizer-role-cfg Nov 6 01:56:35.228: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-snapshotter Nov 6 01:56:35.231: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5039 Nov 6 01:56:35.234: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5039 Nov 6 01:56:35.237: INFO: deleting *v1.Role: csi-mock-volumes-5039-8178/external-snapshotter-leaderelection-csi-mock-volumes-5039 Nov 6 01:56:35.241: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5039-8178/external-snapshotter-leaderelection Nov 6 01:56:35.244: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5039-8178/csi-mock Nov 6 01:56:35.247: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5039 Nov 6 01:56:35.250: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5039 Nov 6 01:56:35.254: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5039 Nov 6 01:56:35.257: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5039 Nov 6 01:56:35.260: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5039 Nov 6 01:56:35.263: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5039 Nov 6 01:56:35.266: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5039 Nov 6 01:56:35.269: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5039-8178/csi-mockplugin Nov 6 01:56:35.274: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5039 STEP: deleting the driver namespace: csi-mock-volumes-5039-8178 STEP: Waiting for namespaces [csi-mock-volumes-5039-8178] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:19.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:93.748 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, no topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":5,"skipped":111,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:12.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 6 01:57:12.239: INFO: The status of Pod test-hostpath-type-987b6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:14.242: INFO: The status of Pod test-hostpath-type-987b6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:16.244: INFO: The status of Pod test-hostpath-type-987b6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:18.243: INFO: The status of Pod test-hostpath-type-987b6 is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:24.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-8963" for this suite. • [SLOW TEST:12.107 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:137 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile","total":-1,"completed":8,"skipped":217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:24.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 6 01:57:26.439: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8896 PodName:hostexec-node1-hn4lj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true Quiet:false} Nov 6 01:57:26.439: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:26.562: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 6 01:57:26.562: INFO: exec node1: stdout: "0\n" Nov 6 01:57:26.562: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 6 01:57:26.562: INFO: exec node1: exit code: 0 Nov 6 01:57:26.562: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:26.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8896" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.181 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket","total":-1,"completed":3,"skipped":221,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:04.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:57:10.837: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d929f9f0-2cea-4050-8332-63fc2d65c5f6] Namespace:persistent-local-volumes-test-3181 PodName:hostexec-node2-cvj5l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:10.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:57:11.078: INFO: Creating a PV followed by a PVC Nov 6 01:57:11.085: INFO: Waiting for PV 
local-pvk5vbx to bind to PVC pvc-x22rd Nov 6 01:57:11.085: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-x22rd] to have phase Bound Nov 6 01:57:11.088: INFO: PersistentVolumeClaim pvc-x22rd found but phase is Pending instead of Bound. Nov 6 01:57:13.093: INFO: PersistentVolumeClaim pvc-x22rd found but phase is Pending instead of Bound. Nov 6 01:57:15.098: INFO: PersistentVolumeClaim pvc-x22rd found but phase is Pending instead of Bound. Nov 6 01:57:17.101: INFO: PersistentVolumeClaim pvc-x22rd found but phase is Pending instead of Bound. Nov 6 01:57:19.104: INFO: PersistentVolumeClaim pvc-x22rd found but phase is Pending instead of Bound. Nov 6 01:57:21.107: INFO: PersistentVolumeClaim pvc-x22rd found but phase is Pending instead of Bound. Nov 6 01:57:23.111: INFO: PersistentVolumeClaim pvc-x22rd found and phase=Bound (12.025380149s) Nov 6 01:57:23.111: INFO: Waiting up to 3m0s for PersistentVolume local-pvk5vbx to have phase Bound Nov 6 01:57:23.113: INFO: PersistentVolume local-pvk5vbx found and phase=Bound (1.901459ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 6 01:57:29.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-3181 exec pod-fced79f8-0876-47e5-91a0-3408a70c1200 --namespace=persistent-local-volumes-test-3181 -- stat -c %g /mnt/volume1' Nov 6 01:57:29.435: INFO: stderr: "" Nov 6 01:57:29.435: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-fced79f8-0876-47e5-91a0-3408a70c1200 in namespace persistent-local-volumes-test-3181 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:57:29.440: INFO: Deleting PersistentVolumeClaim "pvc-x22rd" Nov 6 01:57:29.443: INFO: Deleting PersistentVolume "local-pvk5vbx" STEP: Removing the test directory Nov 6 01:57:29.447: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d929f9f0-2cea-4050-8332-63fc2d65c5f6] Namespace:persistent-local-volumes-test-3181 PodName:hostexec-node2-cvj5l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:29.447: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:29.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3181" for this suite. 
• [SLOW TEST:24.802 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":4,"skipped":221,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:19.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 6 01:57:19.346: INFO: The status of Pod test-hostpath-type-fbwd7 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:21.349: INFO: The status of Pod test-hostpath-type-fbwd7 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:23.350: INFO: The status of Pod test-hostpath-type-fbwd7 is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:31.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-92" for this suite. 
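The HostPathType checks above (mounting a non-existent file with type File, and mounting a regular file with type Socket) each create a pod whose hostPath volume declares an explicit type and then wait for the kubelet's type-mismatch event. The sketch below is only an illustration of such a pod object, not the e2e framework's own helper: it builds the spec and prints it, and the pod name, image, and host path are placeholders (the path merely echoes the test name).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// HostPathFile requires the file to already exist on the node;
	// HostPathFileOrCreate would create it instead. A mismatch between the
	// declared type and what is actually on the node fails the mount.
	hostPathType := corev1.HostPathFile

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-type-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "tester",
				Image:        "busybox", // placeholder image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "host-file", MountPath: "/mnt/test"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host-file",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/does-not-exist-file", // placeholder path
						Type: &hostPathType,
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
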
• [SLOW TEST:12.102 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:156 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket","total":-1,"completed":6,"skipped":114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:26.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:57:30.637: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-7c11ebb3-6e9e-4a58-bbd8-6f39d28d6c58-backend && mount --bind /tmp/local-volume-test-7c11ebb3-6e9e-4a58-bbd8-6f39d28d6c58-backend /tmp/local-volume-test-7c11ebb3-6e9e-4a58-bbd8-6f39d28d6c58-backend && ln -s /tmp/local-volume-test-7c11ebb3-6e9e-4a58-bbd8-6f39d28d6c58-backend /tmp/local-volume-test-7c11ebb3-6e9e-4a58-bbd8-6f39d28d6c58] Namespace:persistent-local-volumes-test-6334 PodName:hostexec-node2-fgswb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:30.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:57:30.876: INFO: Creating a PV followed by a PVC Nov 6 01:57:30.883: INFO: Waiting for PV local-pv6l86r to bind to PVC pvc-cgvm7 Nov 6 01:57:30.883: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cgvm7] to have phase Bound Nov 6 01:57:30.886: INFO: PersistentVolumeClaim pvc-cgvm7 found but phase is Pending instead of Bound. Nov 6 01:57:32.889: INFO: PersistentVolumeClaim pvc-cgvm7 found but phase is Pending instead of Bound. Nov 6 01:57:34.893: INFO: PersistentVolumeClaim pvc-cgvm7 found but phase is Pending instead of Bound. 
Nov 6 01:57:36.896: INFO: PersistentVolumeClaim pvc-cgvm7 found and phase=Bound (6.012588139s) Nov 6 01:57:36.896: INFO: Waiting up to 3m0s for PersistentVolume local-pv6l86r to have phase Bound Nov 6 01:57:36.899: INFO: PersistentVolume local-pv6l86r found and phase=Bound (2.449383ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 STEP: Checking fsGroup is set STEP: Creating a pod Nov 6 01:57:40.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-6334 exec pod-8fe5c04a-bef7-4bf4-9d45-d0b6dfacac41 --namespace=persistent-local-volumes-test-6334 -- stat -c %g /mnt/volume1' Nov 6 01:57:41.167: INFO: stderr: "" Nov 6 01:57:41.167: INFO: stdout: "1234\n" STEP: Deleting pod STEP: Deleting pod pod-8fe5c04a-bef7-4bf4-9d45-d0b6dfacac41 in namespace persistent-local-volumes-test-6334 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:57:41.173: INFO: Deleting PersistentVolumeClaim "pvc-cgvm7" Nov 6 01:57:41.176: INFO: Deleting PersistentVolume "local-pv6l86r" STEP: Removing the test directory Nov 6 01:57:41.180: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-7c11ebb3-6e9e-4a58-bbd8-6f39d28d6c58 && umount /tmp/local-volume-test-7c11ebb3-6e9e-4a58-bbd8-6f39d28d6c58-backend && rm -r /tmp/local-volume-test-7c11ebb3-6e9e-4a58-bbd8-6f39d28d6c58-backend] Namespace:persistent-local-volumes-test-6334 PodName:hostexec-node2-fgswb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:41.180: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:41.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6334" for this suite. 
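Both "Set fsGroup for local volume" runs above reduce to the same check: a pod mounts the locally provisioned PVC with a pod-level fsGroup, and stat -c %g on the mount point must report that group (1234 in these runs). The following is a minimal sketch of such a pod object under assumed names, not the test's generated manifest; the claim name and image are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod-level fsGroup: every file on the mounted volume should end up
	// group-owned by this GID, which is what the stat check above verifies.
	fsGroup := int64(1234)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:         "write-pod",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "volume1", MountPath: "/mnt/volume1"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "my-local-pvc", // placeholder claim name
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
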
• [SLOW TEST:14.794 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":9,"skipped":261,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:29.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-b6a43d6f-52c9-4f46-ad99-c21ca441b832" Nov 6 01:57:31.639: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b6a43d6f-52c9-4f46-ad99-c21ca441b832 && dd if=/dev/zero of=/tmp/local-volume-test-b6a43d6f-52c9-4f46-ad99-c21ca441b832/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-b6a43d6f-52c9-4f46-ad99-c21ca441b832/file] Namespace:persistent-local-volumes-test-924 PodName:hostexec-node1-98wcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:31.639: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:31.752: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b6a43d6f-52c9-4f46-ad99-c21ca441b832/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-924 PodName:hostexec-node1-98wcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:31.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:57:31.918: INFO: Creating a PV followed by a PVC Nov 6 01:57:31.926: INFO: Waiting for PV local-pvgvwqs to bind to PVC pvc-jlzd6 Nov 6 01:57:31.926: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jlzd6] to have phase Bound Nov 6 01:57:31.928: INFO: PersistentVolumeClaim pvc-jlzd6 found but phase is Pending instead of Bound. Nov 6 01:57:33.932: INFO: PersistentVolumeClaim pvc-jlzd6 found but phase is Pending instead of Bound. Nov 6 01:57:35.935: INFO: PersistentVolumeClaim pvc-jlzd6 found but phase is Pending instead of Bound. 
Nov 6 01:57:37.940: INFO: PersistentVolumeClaim pvc-jlzd6 found and phase=Bound (6.013600006s) Nov 6 01:57:37.940: INFO: Waiting up to 3m0s for PersistentVolume local-pvgvwqs to have phase Bound Nov 6 01:57:37.942: INFO: PersistentVolume local-pvgvwqs found and phase=Bound (2.479909ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:57:43.969: INFO: pod "pod-d0232950-0359-42db-b85e-bd67432c0200" created on Node "node1" STEP: Writing in pod1 Nov 6 01:57:43.969: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-924 PodName:pod-d0232950-0359-42db-b85e-bd67432c0200 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:43.969: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:44.055: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000119 seconds, 147.7KB/s", err: Nov 6 01:57:44.055: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-924 PodName:pod-d0232950-0359-42db-b85e-bd67432c0200 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:44.055: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:44.152: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-d0232950-0359-42db-b85e-bd67432c0200 in namespace persistent-local-volumes-test-924 STEP: Creating pod2 STEP: Creating a pod Nov 6 01:57:48.186: INFO: pod "pod-080a3f76-97ca-4b50-b40c-f466e860ac9e" created on Node "node1" STEP: Reading in pod2 Nov 6 01:57:48.186: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-924 PodName:pod-080a3f76-97ca-4b50-b40c-f466e860ac9e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:48.186: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:48.265: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-080a3f76-97ca-4b50-b40c-f466e860ac9e in namespace persistent-local-volumes-test-924 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:57:48.270: INFO: Deleting PersistentVolumeClaim "pvc-jlzd6" Nov 6 01:57:48.273: INFO: Deleting PersistentVolume "local-pvgvwqs" Nov 6 
01:57:48.277: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-b6a43d6f-52c9-4f46-ad99-c21ca441b832/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-924 PodName:hostexec-node1-98wcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:48.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-b6a43d6f-52c9-4f46-ad99-c21ca441b832/file Nov 6 01:57:48.381: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-924 PodName:hostexec-node1-98wcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:48.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-b6a43d6f-52c9-4f46-ad99-c21ca441b832 Nov 6 01:57:48.460: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b6a43d6f-52c9-4f46-ad99-c21ca441b832] Namespace:persistent-local-volumes-test-924 PodName:hostexec-node1-98wcv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:48.460: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:48.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-924" for this suite. 
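The [Volume type: block] case above is backed by a loop device: the test dd's a 20MiB file (4096 x 5120 bytes), attaches it with losetup, and exposes the resulting /dev/loop0 as a local PersistentVolume with volumeMode Block pinned to node1, which two pods then write to and read from in turn. A hedged sketch of what such a PV object could look like is below; the PV name is a placeholder and the real test assembles this through its own helpers.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// volumeMode Block hands the raw device to the pod instead of a mounted
	// filesystem; node affinity pins the PV to the node owning the loop device.
	blockMode := corev1.PersistentVolumeBlock

	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-block-demo"}, // placeholder name
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				// 4096 * 5120 bytes from the dd command above = 20Mi
				corev1.ResourceStorage: resource.MustParse("20Mi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			VolumeMode:                    &blockMode,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/dev/loop0"},
			},
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node1"},
						}},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(pv, "", "  ")
	fmt.Println(string(out))
}
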
• [SLOW TEST:18.957 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:31.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42 [It] should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 STEP: starting configmap-client STEP: Checking that text file contents are perfect. 
Nov 6 01:57:35.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-8172 exec configmap-client --namespace=volume-8172 -- cat /opt/0/firstfile' Nov 6 01:57:35.779: INFO: stderr: "" Nov 6 01:57:35.779: INFO: stdout: "this is the first file" Nov 6 01:57:35.779: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-8172 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:35.779: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:35.875: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-8172 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:35.875: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:35.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=volume-8172 exec configmap-client --namespace=volume-8172 -- cat /opt/1/secondfile' Nov 6 01:57:36.239: INFO: stderr: "" Nov 6 01:57:36.239: INFO: stdout: "this is the second file" Nov 6 01:57:36.239: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/1] Namespace:volume-8172 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:36.239: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:36.329: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/1] Namespace:volume-8172 PodName:configmap-client ContainerName:configmap-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:36.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod configmap-client in namespace volume-8172 Nov 6 01:57:36.471: INFO: Waiting for pod configmap-client to disappear Nov 6 01:57:36.473: INFO: Pod configmap-client still exists Nov 6 01:57:38.475: INFO: Waiting for pod configmap-client to disappear Nov 6 01:57:38.477: INFO: Pod configmap-client still exists Nov 6 01:57:40.475: INFO: Waiting for pod configmap-client to disappear Nov 6 01:57:40.478: INFO: Pod configmap-client still exists Nov 6 01:57:42.474: INFO: Waiting for pod configmap-client to disappear Nov 6 01:57:42.477: INFO: Pod configmap-client still exists Nov 6 01:57:44.474: INFO: Waiting for pod configmap-client to disappear Nov 6 01:57:44.477: INFO: Pod configmap-client still exists Nov 6 01:57:46.475: INFO: Waiting for pod configmap-client to disappear Nov 6 01:57:46.479: INFO: Pod configmap-client still exists Nov 6 01:57:48.474: INFO: Waiting for pod configmap-client to disappear Nov 6 01:57:48.476: INFO: Pod configmap-client still exists Nov 6 01:57:50.475: INFO: Waiting for pod configmap-client to disappear Nov 6 01:57:50.479: INFO: Pod configmap-client no longer exists [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:50.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-8172" for this suite. 
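The ConfigMap volume test above mounts ConfigMap data into the configmap-client pod and reads the files back ("this is the first file" / "this is the second file"). For illustration only, a ConfigMap plus a pod mounting it could be declared as below; the object names and image are placeholders, and the real test mounts two sources under /opt/0 and /opt/1 rather than the single mount shown here.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Keys and contents mirror what the test reads back in the log above.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "volume-configmap-demo"}, // placeholder name
		Data: map[string]string{
			"firstfile":  "this is the first file",
			"secondfile": "this is the second file",
		},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-client"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "configmap-client",
				Image:        "busybox", // placeholder image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/opt/0"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
		},
	}

	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
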
• [SLOW TEST:19.009 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 ------------------------------ {"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":7,"skipped":149,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:09.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Nov 6 01:57:39.363: INFO: Deleting pod "pv-8591"/"pod-ephm-test-projected-lcv7" Nov 6 01:57:39.363: INFO: Deleting pod "pod-ephm-test-projected-lcv7" in namespace "pv-8591" Nov 6 01:57:39.367: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-lcv7" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:51.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8591" for this suite. 
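The Ephemeralstorage case above builds a pod whose volume points at a ConfigMap that does not exist, so the pod can never finish mounting, and then verifies the pod can still be deleted cleanly. A rough client-go sketch of that flow is below, assuming the kubeconfig path used throughout this run and placeholder pod/namespace names; it is not the suite's own implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // placeholder namespace
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-ephm-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "test",
				Image:        "busybox", // placeholder image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "bad", MountPath: "/mnt"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "bad",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// Refers to a ConfigMap that was never created, so the
						// pod stays stuck mounting its volumes.
						LocalObjectReference: corev1.LocalObjectReference{Name: "does-not-exist"},
					},
				},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Even though the pod can never start, deleting it should still succeed.
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), pod.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	for i := 0; i < 150; i++ { // poll up to ~5 minutes for full deletion
		_, err := cs.CoreV1().Pods(ns).Get(context.TODO(), pod.Name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod fully deleted")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod deletion")
}
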
• [SLOW TEST:42.055 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : configmap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":7,"skipped":290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:50.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 6 01:57:50.538: INFO: Waiting up to 5m0s for pod "pod-7a76eb00-97d6-41d1-a8a2-806d9e24cc23" in namespace "emptydir-9651" to be "Succeeded or Failed" Nov 6 01:57:50.542: INFO: Pod "pod-7a76eb00-97d6-41d1-a8a2-806d9e24cc23": Phase="Pending", Reason="", readiness=false. Elapsed: 3.813047ms Nov 6 01:57:52.547: INFO: Pod "pod-7a76eb00-97d6-41d1-a8a2-806d9e24cc23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008381773s Nov 6 01:57:54.551: INFO: Pod "pod-7a76eb00-97d6-41d1-a8a2-806d9e24cc23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012502479s Nov 6 01:57:56.557: INFO: Pod "pod-7a76eb00-97d6-41d1-a8a2-806d9e24cc23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018626797s STEP: Saw pod success Nov 6 01:57:56.557: INFO: Pod "pod-7a76eb00-97d6-41d1-a8a2-806d9e24cc23" satisfied condition "Succeeded or Failed" Nov 6 01:57:56.559: INFO: Trying to get logs from node node1 pod pod-7a76eb00-97d6-41d1-a8a2-806d9e24cc23 container test-container: STEP: delete the pod Nov 6 01:57:56.654: INFO: Waiting for pod pod-7a76eb00-97d6-41d1-a8a2-806d9e24cc23 to disappear Nov 6 01:57:56.656: INFO: Pod pod-7a76eb00-97d6-41d1-a8a2-806d9e24cc23 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:56.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9651" for this suite. 
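The EmptyDir FSGroup case above writes a 0644 file as root onto a tmpfs-backed emptyDir and checks its ownership. Below is a minimal sketch of a comparable pod spec with an assumed fsGroup value, a placeholder name, and a busybox command standing in for the suite's test image; it only prints the object it builds.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// fsGroup applied at pod level; medium "Memory" makes the emptyDir a tmpfs
	// mount, which is the (root,0644,tmpfs) combination exercised above.
	fsGroup := int64(123) // assumed value for illustration

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // placeholder image
				Command: []string{"sh", "-c",
					"echo hello > /mnt/td/file && chmod 0644 /mnt/td/file && ls -ln /mnt/td/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "td", MountPath: "/mnt/td"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "td",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
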
• [SLOW TEST:6.158 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 files with FSGroup ownership should support (root,0644,tmpfs) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":8,"skipped":154,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:51.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 6 01:57:51.587: INFO: The status of Pod test-hostpath-type-8xjs4 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:53.590: INFO: The status of Pod test-hostpath-type-8xjs4 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:55.593: INFO: The status of Pod test-hostpath-type-8xjs4 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:57.591: INFO: The status of Pod test-hostpath-type-8xjs4 is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:57:59.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-7283" for this suite. 
• [SLOW TEST:8.086 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:231 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev","total":-1,"completed":8,"skipped":377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:26.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-7799 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:56:26.064: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-attacher Nov 6 01:56:26.069: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7799 Nov 6 01:56:26.069: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7799 Nov 6 01:56:26.072: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7799 Nov 6 01:56:26.075: INFO: creating *v1.Role: csi-mock-volumes-7799-8719/external-attacher-cfg-csi-mock-volumes-7799 Nov 6 01:56:26.078: INFO: creating *v1.RoleBinding: csi-mock-volumes-7799-8719/csi-attacher-role-cfg Nov 6 01:56:26.081: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-provisioner Nov 6 01:56:26.084: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7799 Nov 6 01:56:26.084: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7799 Nov 6 01:56:26.088: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7799 Nov 6 01:56:26.090: INFO: creating *v1.Role: csi-mock-volumes-7799-8719/external-provisioner-cfg-csi-mock-volumes-7799 Nov 6 01:56:26.093: INFO: creating *v1.RoleBinding: csi-mock-volumes-7799-8719/csi-provisioner-role-cfg Nov 6 01:56:26.095: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-resizer Nov 6 01:56:26.098: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7799 Nov 6 01:56:26.098: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7799 Nov 6 01:56:26.100: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7799 Nov 6 01:56:26.103: INFO: creating *v1.Role: csi-mock-volumes-7799-8719/external-resizer-cfg-csi-mock-volumes-7799 Nov 6 01:56:26.106: INFO: creating *v1.RoleBinding: csi-mock-volumes-7799-8719/csi-resizer-role-cfg Nov 6 01:56:26.109: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-snapshotter Nov 6 01:56:26.111: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7799 Nov 6 01:56:26.111: INFO: Define cluster role 
external-snapshotter-runner-csi-mock-volumes-7799 Nov 6 01:56:26.113: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7799 Nov 6 01:56:26.116: INFO: creating *v1.Role: csi-mock-volumes-7799-8719/external-snapshotter-leaderelection-csi-mock-volumes-7799 Nov 6 01:56:26.118: INFO: creating *v1.RoleBinding: csi-mock-volumes-7799-8719/external-snapshotter-leaderelection Nov 6 01:56:26.120: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-mock Nov 6 01:56:26.123: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7799 Nov 6 01:56:26.125: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7799 Nov 6 01:56:26.127: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7799 Nov 6 01:56:26.130: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7799 Nov 6 01:56:26.132: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7799 Nov 6 01:56:26.135: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7799 Nov 6 01:56:26.138: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7799 Nov 6 01:56:26.140: INFO: creating *v1.StatefulSet: csi-mock-volumes-7799-8719/csi-mockplugin Nov 6 01:56:26.144: INFO: creating *v1.StatefulSet: csi-mock-volumes-7799-8719/csi-mockplugin-attacher Nov 6 01:56:26.148: INFO: creating *v1.StatefulSet: csi-mock-volumes-7799-8719/csi-mockplugin-resizer Nov 6 01:56:26.151: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7799 to register on node node2 STEP: Creating pod Nov 6 01:56:52.547: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:56:52.551: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-p9cwl] to have phase Bound Nov 6 01:56:52.553: INFO: PersistentVolumeClaim pvc-p9cwl found but phase is Pending instead of Bound. 
Nov 6 01:56:54.556: INFO: PersistentVolumeClaim pvc-p9cwl found and phase=Bound (2.00483143s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Checking for conditions on pvc STEP: Deleting the previously created pod Nov 6 01:57:08.596: INFO: Deleting pod "pvc-volume-tester-8tcw7" in namespace "csi-mock-volumes-7799" Nov 6 01:57:08.601: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8tcw7" to be fully deleted STEP: Creating a new pod with same volume STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-8tcw7 Nov 6 01:57:32.623: INFO: Deleting pod "pvc-volume-tester-8tcw7" in namespace "csi-mock-volumes-7799" STEP: Deleting pod pvc-volume-tester-n75fr Nov 6 01:57:32.627: INFO: Deleting pod "pvc-volume-tester-n75fr" in namespace "csi-mock-volumes-7799" Nov 6 01:57:32.631: INFO: Wait up to 5m0s for pod "pvc-volume-tester-n75fr" to be fully deleted STEP: Deleting claim pvc-p9cwl Nov 6 01:57:40.644: INFO: Waiting up to 2m0s for PersistentVolume pvc-c377fd52-4368-4a8f-841a-e95f98349de8 to get deleted Nov 6 01:57:40.646: INFO: PersistentVolume pvc-c377fd52-4368-4a8f-841a-e95f98349de8 found and phase=Bound (1.686629ms) Nov 6 01:57:42.652: INFO: PersistentVolume pvc-c377fd52-4368-4a8f-841a-e95f98349de8 was removed STEP: Deleting storageclass csi-mock-volumes-7799-scrthbl STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7799 STEP: Waiting for namespaces [csi-mock-volumes-7799] to vanish STEP: uninstalling csi mock driver Nov 6 01:57:48.665: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-attacher Nov 6 01:57:48.669: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7799 Nov 6 01:57:48.673: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7799 Nov 6 01:57:48.677: INFO: deleting *v1.Role: csi-mock-volumes-7799-8719/external-attacher-cfg-csi-mock-volumes-7799 Nov 6 01:57:48.680: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7799-8719/csi-attacher-role-cfg Nov 6 01:57:48.684: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-provisioner Nov 6 01:57:48.687: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7799 Nov 6 01:57:48.690: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7799 Nov 6 01:57:48.697: INFO: deleting *v1.Role: csi-mock-volumes-7799-8719/external-provisioner-cfg-csi-mock-volumes-7799 Nov 6 01:57:48.700: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7799-8719/csi-provisioner-role-cfg Nov 6 01:57:48.703: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-resizer Nov 6 01:57:48.710: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7799 Nov 6 01:57:48.716: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7799 Nov 6 01:57:48.720: INFO: deleting *v1.Role: csi-mock-volumes-7799-8719/external-resizer-cfg-csi-mock-volumes-7799 Nov 6 01:57:48.724: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7799-8719/csi-resizer-role-cfg Nov 6 01:57:48.728: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-snapshotter Nov 6 01:57:48.731: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7799 Nov 6 01:57:48.736: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7799 Nov 6 01:57:48.739: INFO: deleting *v1.Role: csi-mock-volumes-7799-8719/external-snapshotter-leaderelection-csi-mock-volumes-7799 Nov 6 01:57:48.742: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-7799-8719/external-snapshotter-leaderelection Nov 6 01:57:48.750: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7799-8719/csi-mock Nov 6 01:57:48.753: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7799 Nov 6 01:57:48.757: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7799 Nov 6 01:57:48.760: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7799 Nov 6 01:57:48.763: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7799 Nov 6 01:57:48.766: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7799 Nov 6 01:57:48.769: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7799 Nov 6 01:57:48.772: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7799 Nov 6 01:57:48.777: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7799-8719/csi-mockplugin Nov 6 01:57:48.780: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7799-8719/csi-mockplugin-attacher Nov 6 01:57:48.783: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7799-8719/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-7799-8719 STEP: Waiting for namespaces [csi-mock-volumes-7799-8719] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:00.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:94.799 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume by restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":15,"skipped":545,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:41.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:57:43.462: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fa231c70-0ad8-4474-b55a-331fba1488fe] Namespace:persistent-local-volumes-test-8160 PodName:hostexec-node2-42xbb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:57:43.462: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Creating local PVCs and PVs Nov 6 01:57:43.561: INFO: Creating a PV followed by a PVC Nov 6 01:57:43.568: INFO: Waiting for PV local-pvcv4hw to bind to PVC pvc-m2269 Nov 6 01:57:43.569: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-m2269] to have phase Bound Nov 6 01:57:43.570: INFO: PersistentVolumeClaim pvc-m2269 found but phase is Pending instead of Bound. Nov 6 01:57:45.575: INFO: PersistentVolumeClaim pvc-m2269 found but phase is Pending instead of Bound. Nov 6 01:57:47.581: INFO: PersistentVolumeClaim pvc-m2269 found but phase is Pending instead of Bound. Nov 6 01:57:49.584: INFO: PersistentVolumeClaim pvc-m2269 found but phase is Pending instead of Bound. Nov 6 01:57:51.588: INFO: PersistentVolumeClaim pvc-m2269 found but phase is Pending instead of Bound. Nov 6 01:57:53.590: INFO: PersistentVolumeClaim pvc-m2269 found and phase=Bound (10.021792532s) Nov 6 01:57:53.590: INFO: Waiting up to 3m0s for PersistentVolume local-pvcv4hw to have phase Bound Nov 6 01:57:53.593: INFO: PersistentVolume local-pvcv4hw found and phase=Bound (2.746568ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 6 01:57:57.619: INFO: pod "pod-50b2c084-4813-4c71-a2be-7ed8aff73f60" created on Node "node2" STEP: Writing in pod1 Nov 6 01:57:57.619: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8160 PodName:pod-50b2c084-4813-4c71-a2be-7ed8aff73f60 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:57.619: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:57.750: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:57:57.750: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8160 PodName:pod-50b2c084-4813-4c71-a2be-7ed8aff73f60 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:57.750: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:57.831: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 6 01:58:01.852: INFO: pod "pod-be6b0877-4cb7-4ab7-b4d8-e424d8a384dd" created on Node "node2" Nov 6 01:58:01.852: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8160 PodName:pod-be6b0877-4cb7-4ab7-b4d8-e424d8a384dd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:01.852: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:01.938: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 6 01:58:01.938: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-fa231c70-0ad8-4474-b55a-331fba1488fe > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8160 PodName:pod-be6b0877-4cb7-4ab7-b4d8-e424d8a384dd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:01.938: INFO: >>> kubeConfig: /root/.kube/config Nov 
6 01:58:02.017: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-fa231c70-0ad8-4474-b55a-331fba1488fe > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 6 01:58:02.017: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-8160 PodName:pod-50b2c084-4813-4c71-a2be-7ed8aff73f60 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:02.017: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:02.136: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-fa231c70-0ad8-4474-b55a-331fba1488fe", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-50b2c084-4813-4c71-a2be-7ed8aff73f60 in namespace persistent-local-volumes-test-8160 STEP: Deleting pod2 STEP: Deleting pod pod-be6b0877-4cb7-4ab7-b4d8-e424d8a384dd in namespace persistent-local-volumes-test-8160 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:58:02.145: INFO: Deleting PersistentVolumeClaim "pvc-m2269" Nov 6 01:58:02.148: INFO: Deleting PersistentVolume "local-pvcv4hw" STEP: Removing the test directory Nov 6 01:58:02.153: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fa231c70-0ad8-4474-b55a-331fba1488fe] Namespace:persistent-local-volumes-test-8160 PodName:hostexec-node2-42xbb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:02.153: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:02.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8160" for this suite. 
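For context on the [Volume type: dir] setup above: the "local volume" is just the directory that the hostexec pod created with mkdir -p, surfaced as a PersistentVolume pinned to node2 by node affinity, plus an ordinary PVC bound to it. A rough sketch of the PV object (capacity, reclaim policy, and storage class name are assumptions; the path and node come from the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var localPV = &corev1.PersistentVolume{
	ObjectMeta: metav1.ObjectMeta{GenerateName: "local-pv"},
	Spec: corev1.PersistentVolumeSpec{
		Capacity:                      corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("2Gi")},
		AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
		PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
		StorageClassName:              "local-storage", // assumed name
		PersistentVolumeSource: corev1.PersistentVolumeSource{
			// The directory the hostexec pod created with mkdir -p above.
			Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-test-fa231c70-0ad8-4474-b55a-331fba1488fe"},
		},
		// Local PVs must pin themselves to the node that owns the directory.
		NodeAffinity: &corev1.VolumeNodeAffinity{
			Required: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/hostname",
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"node2"},
					}},
				}},
			},
		},
	},
}

Because both test pods bind the same claim and therefore land on node2, they share the directory, which is why pod2 reads exactly the string pod1 wrote and vice versa.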
• [SLOW TEST:20.868 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":10,"skipped":276,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:48.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 6 01:57:48.746: INFO: The status of Pod test-hostpath-type-5bn9l is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:50.751: INFO: The status of Pod test-hostpath-type-5bn9l is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:57:52.750: INFO: The status of Pod test-hostpath-type-5bn9l is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:02.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-1021" for this suite. 
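The passing hostPath-file spec above leans on two different declared types: HostPathFileOrCreate makes kubelet create 'afile' on first mount, and HostPathFile then asserts that a regular file already exists at the path. A short sketch of the two volume declarations (paths are hypothetical):

package sketch

import corev1 "k8s.io/api/core/v1"

var (
	fileOrCreate = corev1.HostPathFileOrCreate // kubelet creates an empty file if absent
	fileOnly     = corev1.HostPathFile         // mount fails unless a regular file already exists
)

var createFileVolume = corev1.Volume{
	Name: "afile-create",
	VolumeSource: corev1.VolumeSource{
		HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/hostpath-test/afile", Type: &fileOrCreate},
	},
}

var mountExistingFileVolume = corev1.Volume{
	Name: "afile-mount",
	VolumeSource: corev1.VolumeSource{
		HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/hostpath-test/afile", Type: &fileOnly},
	},
}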
• [SLOW TEST:14.104 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount file 'afile' successfully when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:143 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile","total":-1,"completed":6,"skipped":297,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:52.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 STEP: Building a driver namespace object, basename csi-mock-volumes-4349 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:56:52.381: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-attacher Nov 6 01:56:52.385: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4349 Nov 6 01:56:52.385: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4349 Nov 6 01:56:52.387: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4349 Nov 6 01:56:52.389: INFO: creating *v1.Role: csi-mock-volumes-4349-3584/external-attacher-cfg-csi-mock-volumes-4349 Nov 6 01:56:52.393: INFO: creating *v1.RoleBinding: csi-mock-volumes-4349-3584/csi-attacher-role-cfg Nov 6 01:56:52.395: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-provisioner Nov 6 01:56:52.398: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4349 Nov 6 01:56:52.398: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4349 Nov 6 01:56:52.402: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4349 Nov 6 01:56:52.404: INFO: creating *v1.Role: csi-mock-volumes-4349-3584/external-provisioner-cfg-csi-mock-volumes-4349 Nov 6 01:56:52.407: INFO: creating *v1.RoleBinding: csi-mock-volumes-4349-3584/csi-provisioner-role-cfg Nov 6 01:56:52.409: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-resizer Nov 6 01:56:52.412: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4349 Nov 6 01:56:52.412: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4349 Nov 6 01:56:52.414: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4349 Nov 6 01:56:52.417: INFO: creating *v1.Role: csi-mock-volumes-4349-3584/external-resizer-cfg-csi-mock-volumes-4349 Nov 6 01:56:52.420: INFO: creating *v1.RoleBinding: csi-mock-volumes-4349-3584/csi-resizer-role-cfg Nov 6 01:56:52.422: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-snapshotter Nov 6 01:56:52.424: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4349 Nov 6 01:56:52.424: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4349 Nov 6 01:56:52.426: 
INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4349 Nov 6 01:56:52.429: INFO: creating *v1.Role: csi-mock-volumes-4349-3584/external-snapshotter-leaderelection-csi-mock-volumes-4349 Nov 6 01:56:52.432: INFO: creating *v1.RoleBinding: csi-mock-volumes-4349-3584/external-snapshotter-leaderelection Nov 6 01:56:52.434: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-mock Nov 6 01:56:52.437: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4349 Nov 6 01:56:52.439: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4349 Nov 6 01:56:52.442: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4349 Nov 6 01:56:52.445: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4349 Nov 6 01:56:52.448: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4349 Nov 6 01:56:52.450: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4349 Nov 6 01:56:52.453: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4349 Nov 6 01:56:52.455: INFO: creating *v1.StatefulSet: csi-mock-volumes-4349-3584/csi-mockplugin Nov 6 01:56:52.463: INFO: creating *v1.StatefulSet: csi-mock-volumes-4349-3584/csi-mockplugin-attacher Nov 6 01:56:52.467: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4349 to register on node node1 STEP: Creating pod Nov 6 01:57:01.984: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:57:01.988: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rmknb] to have phase Bound Nov 6 01:57:01.991: INFO: PersistentVolumeClaim pvc-rmknb found but phase is Pending instead of Bound. 
Nov 6 01:57:03.994: INFO: PersistentVolumeClaim pvc-rmknb found and phase=Bound (2.006059021s) STEP: Deleting the previously created pod Nov 6 01:57:18.020: INFO: Deleting pod "pvc-volume-tester-gr87t" in namespace "csi-mock-volumes-4349" Nov 6 01:57:18.026: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gr87t" to be fully deleted STEP: Checking CSI driver logs Nov 6 01:57:24.046: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/9df1e764-ba5f-42e6-8923-d5d4421124c0/volumes/kubernetes.io~csi/pvc-4863d6b2-6bb9-4861-a5da-f05291c9a56b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-gr87t Nov 6 01:57:24.046: INFO: Deleting pod "pvc-volume-tester-gr87t" in namespace "csi-mock-volumes-4349" STEP: Deleting claim pvc-rmknb Nov 6 01:57:24.055: INFO: Waiting up to 2m0s for PersistentVolume pvc-4863d6b2-6bb9-4861-a5da-f05291c9a56b to get deleted Nov 6 01:57:24.057: INFO: PersistentVolume pvc-4863d6b2-6bb9-4861-a5da-f05291c9a56b found and phase=Bound (2.115945ms) Nov 6 01:57:26.061: INFO: PersistentVolume pvc-4863d6b2-6bb9-4861-a5da-f05291c9a56b found and phase=Released (2.00599492s) Nov 6 01:57:28.064: INFO: PersistentVolume pvc-4863d6b2-6bb9-4861-a5da-f05291c9a56b found and phase=Released (4.009164683s) Nov 6 01:57:30.069: INFO: PersistentVolume pvc-4863d6b2-6bb9-4861-a5da-f05291c9a56b found and phase=Released (6.014196451s) Nov 6 01:57:32.072: INFO: PersistentVolume pvc-4863d6b2-6bb9-4861-a5da-f05291c9a56b was removed STEP: Deleting storageclass csi-mock-volumes-4349-sc8msf6 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4349 STEP: Waiting for namespaces [csi-mock-volumes-4349] to vanish STEP: uninstalling csi mock driver Nov 6 01:57:38.090: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-attacher Nov 6 01:57:38.096: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4349 Nov 6 01:57:38.100: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4349 Nov 6 01:57:38.104: INFO: deleting *v1.Role: csi-mock-volumes-4349-3584/external-attacher-cfg-csi-mock-volumes-4349 Nov 6 01:57:38.107: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4349-3584/csi-attacher-role-cfg Nov 6 01:57:38.111: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-provisioner Nov 6 01:57:38.115: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4349 Nov 6 01:57:38.118: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4349 Nov 6 01:57:38.124: INFO: deleting *v1.Role: csi-mock-volumes-4349-3584/external-provisioner-cfg-csi-mock-volumes-4349 Nov 6 01:57:38.130: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4349-3584/csi-provisioner-role-cfg Nov 6 01:57:38.137: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-resizer Nov 6 01:57:38.142: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4349 Nov 6 01:57:38.146: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4349 Nov 6 01:57:38.149: INFO: deleting *v1.Role: csi-mock-volumes-4349-3584/external-resizer-cfg-csi-mock-volumes-4349 Nov 6 01:57:38.153: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4349-3584/csi-resizer-role-cfg Nov 6 01:57:38.156: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-snapshotter Nov 6 
01:57:38.160: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4349 Nov 6 01:57:38.163: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4349 Nov 6 01:57:38.166: INFO: deleting *v1.Role: csi-mock-volumes-4349-3584/external-snapshotter-leaderelection-csi-mock-volumes-4349 Nov 6 01:57:38.170: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4349-3584/external-snapshotter-leaderelection Nov 6 01:57:38.173: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4349-3584/csi-mock Nov 6 01:57:38.176: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4349 Nov 6 01:57:38.179: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4349 Nov 6 01:57:38.183: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4349 Nov 6 01:57:38.187: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4349 Nov 6 01:57:38.190: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4349 Nov 6 01:57:38.194: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4349 Nov 6 01:57:38.197: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4349 Nov 6 01:57:38.200: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4349-3584/csi-mockplugin Nov 6 01:57:38.204: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4349-3584/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4349-3584 STEP: Waiting for namespaces [csi-mock-volumes-4349-3584] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:06.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:73.896 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374 token should not be plumbed down when CSIDriver is not deployed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":12,"skipped":360,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:06.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:75 STEP: Creating configMap with name configmap-test-volume-678fdb33-c0be-4bc3-b3af-d9251e5971fb STEP: Creating a pod to test consume configMaps Nov 6 01:58:06.274: INFO: Waiting up to 5m0s for pod "pod-configmaps-752bba56-fa5e-407b-a3c7-22125fe9598b" in namespace "configmap-3628" to be "Succeeded or Failed" Nov 6 
01:58:06.278: INFO: Pod "pod-configmaps-752bba56-fa5e-407b-a3c7-22125fe9598b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353471ms Nov 6 01:58:08.282: INFO: Pod "pod-configmaps-752bba56-fa5e-407b-a3c7-22125fe9598b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007766856s Nov 6 01:58:10.286: INFO: Pod "pod-configmaps-752bba56-fa5e-407b-a3c7-22125fe9598b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011945851s STEP: Saw pod success Nov 6 01:58:10.286: INFO: Pod "pod-configmaps-752bba56-fa5e-407b-a3c7-22125fe9598b" satisfied condition "Succeeded or Failed" Nov 6 01:58:10.287: INFO: Trying to get logs from node node2 pod pod-configmaps-752bba56-fa5e-407b-a3c7-22125fe9598b container agnhost-container: STEP: delete the pod Nov 6 01:58:10.307: INFO: Waiting for pod pod-configmaps-752bba56-fa5e-407b-a3c7-22125fe9598b to disappear Nov 6 01:58:10.309: INFO: Pod pod-configmaps-752bba56-fa5e-407b-a3c7-22125fe9598b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:10.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3628" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":13,"skipped":366,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:56:38.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-6518 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 6 01:56:39.005: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-attacher Nov 6 01:56:39.008: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6518 Nov 6 01:56:39.008: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-6518 Nov 6 01:56:39.011: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6518 Nov 6 01:56:39.015: INFO: creating *v1.Role: csi-mock-volumes-6518-9834/external-attacher-cfg-csi-mock-volumes-6518 Nov 6 01:56:39.017: INFO: creating *v1.RoleBinding: csi-mock-volumes-6518-9834/csi-attacher-role-cfg Nov 6 01:56:39.020: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-provisioner Nov 6 01:56:39.022: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6518 Nov 6 01:56:39.022: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-6518 Nov 6 01:56:39.025: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6518 Nov 6 01:56:39.028: INFO: creating *v1.Role: csi-mock-volumes-6518-9834/external-provisioner-cfg-csi-mock-volumes-6518 Nov 6 01:56:39.031: INFO: creating *v1.RoleBinding: csi-mock-volumes-6518-9834/csi-provisioner-role-cfg Nov 6 01:56:39.034: INFO: creating 
*v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-resizer Nov 6 01:56:39.036: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6518 Nov 6 01:56:39.036: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-6518 Nov 6 01:56:39.039: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6518 Nov 6 01:56:39.041: INFO: creating *v1.Role: csi-mock-volumes-6518-9834/external-resizer-cfg-csi-mock-volumes-6518 Nov 6 01:56:39.043: INFO: creating *v1.RoleBinding: csi-mock-volumes-6518-9834/csi-resizer-role-cfg Nov 6 01:56:39.046: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-snapshotter Nov 6 01:56:39.048: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6518 Nov 6 01:56:39.048: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-6518 Nov 6 01:56:39.051: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6518 Nov 6 01:56:39.054: INFO: creating *v1.Role: csi-mock-volumes-6518-9834/external-snapshotter-leaderelection-csi-mock-volumes-6518 Nov 6 01:56:39.057: INFO: creating *v1.RoleBinding: csi-mock-volumes-6518-9834/external-snapshotter-leaderelection Nov 6 01:56:39.060: INFO: creating *v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-mock Nov 6 01:56:39.062: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6518 Nov 6 01:56:39.064: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6518 Nov 6 01:56:39.067: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6518 Nov 6 01:56:39.069: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6518 Nov 6 01:56:39.072: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6518 Nov 6 01:56:39.074: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6518 Nov 6 01:56:39.077: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6518 Nov 6 01:56:39.079: INFO: creating *v1.StatefulSet: csi-mock-volumes-6518-9834/csi-mockplugin Nov 6 01:56:39.083: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-6518 Nov 6 01:56:39.085: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-6518" Nov 6 01:56:39.088: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-6518 to register on node node2 I1106 01:56:51.214437 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6518","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:56:51.331188 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1106 01:56:51.332629 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6518","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:56:51.376291 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1106 01:56:51.383775 24 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1106 01:56:51.781542 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-6518"},"Error":"","FullError":null} STEP: Creating pod Nov 6 01:56:55.355: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:56:55.359: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-859m2] to have phase Bound Nov 6 01:56:55.361: INFO: PersistentVolumeClaim pvc-859m2 found but phase is Pending instead of Bound. I1106 01:56:55.367099 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38"}}},"Error":"","FullError":null} Nov 6 01:56:57.365: INFO: PersistentVolumeClaim pvc-859m2 found and phase=Bound (2.005572137s) Nov 6 01:56:57.379: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-859m2] to have phase Bound Nov 6 01:56:57.382: INFO: PersistentVolumeClaim pvc-859m2 found and phase=Bound (2.797422ms) STEP: Waiting for expected CSI calls I1106 01:56:58.564400 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:56:58.567572 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38","storage.kubernetes.io/csiProvisionerIdentity":"1636163811419-8081-csi-mock-csi-mock-volumes-6518"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1106 01:56:59.168609 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:56:59.170871 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38","storage.kubernetes.io/csiProvisionerIdentity":"1636163811419-8081-csi-mock-csi-mock-volumes-6518"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1106 
01:57:00.192871 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:57:00.194905 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38","storage.kubernetes.io/csiProvisionerIdentity":"1636163811419-8081-csi-mock-csi-mock-volumes-6518"}},"Response":null,"Error":"rpc error: code = DeadlineExceeded desc = fake error","FullError":{"code":4,"message":"fake error"}} I1106 01:57:02.220006 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:57:02.221: INFO: >>> kubeConfig: /root/.kube/config I1106 01:57:02.401625 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38","storage.kubernetes.io/csiProvisionerIdentity":"1636163811419-8081-csi-mock-csi-mock-volumes-6518"}},"Response":{},"Error":"","FullError":null} I1106 01:57:02.737588 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:57:02.739: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:02.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Waiting for pod to be running Nov 6 01:57:03.447: INFO: >>> kubeConfig: /root/.kube/config I1106 01:57:03.539562 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38/globalmount","target_path":"/var/lib/kubelet/pods/ef0b73d1-e766-45d1-813b-5353399a1fae/volumes/kubernetes.io~csi/pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38","storage.kubernetes.io/csiProvisionerIdentity":"1636163811419-8081-csi-mock-csi-mock-volumes-6518"}},"Response":{},"Error":"","FullError":null} STEP: Deleting the previously created pod Nov 6 01:57:09.391: INFO: Deleting pod "pvc-volume-tester-hnnbm" in namespace "csi-mock-volumes-6518" Nov 6 01:57:09.395: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hnnbm" to be fully deleted Nov 6 01:57:12.547: INFO: >>> kubeConfig: /root/.kube/config I1106 01:57:12.705191 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ef0b73d1-e766-45d1-813b-5353399a1fae/volumes/kubernetes.io~csi/pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38/mount"},"Response":{},"Error":"","FullError":null} I1106 
01:57:12.756221 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:57:12.758982 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38/globalmount"},"Response":{},"Error":"","FullError":null} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-hnnbm Nov 6 01:57:20.400: INFO: Deleting pod "pvc-volume-tester-hnnbm" in namespace "csi-mock-volumes-6518" STEP: Deleting claim pvc-859m2 Nov 6 01:57:20.411: INFO: Waiting up to 2m0s for PersistentVolume pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38 to get deleted Nov 6 01:57:20.413: INFO: PersistentVolume pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38 found and phase=Bound (2.390197ms) I1106 01:57:20.500777 24 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 6 01:57:22.417: INFO: PersistentVolume pvc-7d9252c6-eba1-47c1-92cb-b0eb17b0bd38 was removed STEP: Deleting storageclass csi-mock-volumes-6518-scgb94r STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-6518 STEP: Waiting for namespaces [csi-mock-volumes-6518] to vanish STEP: uninstalling csi mock driver Nov 6 01:57:28.547: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-attacher Nov 6 01:57:28.551: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-6518 Nov 6 01:57:28.555: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-6518 Nov 6 01:57:28.558: INFO: deleting *v1.Role: csi-mock-volumes-6518-9834/external-attacher-cfg-csi-mock-volumes-6518 Nov 6 01:57:28.561: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6518-9834/csi-attacher-role-cfg Nov 6 01:57:28.565: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-provisioner Nov 6 01:57:28.568: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-6518 Nov 6 01:57:28.571: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-6518 Nov 6 01:57:28.575: INFO: deleting *v1.Role: csi-mock-volumes-6518-9834/external-provisioner-cfg-csi-mock-volumes-6518 Nov 6 01:57:28.578: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6518-9834/csi-provisioner-role-cfg Nov 6 01:57:28.581: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-resizer Nov 6 01:57:28.585: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-6518 Nov 6 01:57:28.589: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-6518 Nov 6 01:57:28.592: INFO: deleting *v1.Role: csi-mock-volumes-6518-9834/external-resizer-cfg-csi-mock-volumes-6518 Nov 6 01:57:28.596: INFO: deleting *v1.RoleBinding: csi-mock-volumes-6518-9834/csi-resizer-role-cfg Nov 6 01:57:28.599: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-snapshotter Nov 6 01:57:28.603: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-6518 Nov 6 01:57:28.606: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-6518 Nov 6 01:57:28.609: INFO: deleting *v1.Role: csi-mock-volumes-6518-9834/external-snapshotter-leaderelection-csi-mock-volumes-6518 Nov 6 01:57:28.613: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-6518-9834/external-snapshotter-leaderelection Nov 6 01:57:28.616: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-6518-9834/csi-mock Nov 6 01:57:28.620: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-6518 Nov 6 01:57:28.623: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-6518 Nov 6 01:57:28.626: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-6518 Nov 6 01:57:28.629: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-6518 Nov 6 01:57:28.633: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-6518 Nov 6 01:57:28.636: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-6518 Nov 6 01:57:28.638: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-6518 Nov 6 01:57:28.642: INFO: deleting *v1.StatefulSet: csi-mock-volumes-6518-9834/csi-mockplugin Nov 6 01:57:28.646: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-6518 STEP: deleting the driver namespace: csi-mock-volumes-6518-9834 STEP: Waiting for namespaces [csi-mock-volumes-6518-9834] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:12.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:93.714 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should retry NodeStage after NodeStage ephemeral error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error","total":-1,"completed":13,"skipped":456,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:56.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:58:00.736: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ff03798a-3514-4d00-827c-3db9095d8283] Namespace:persistent-local-volumes-test-9181 PodName:hostexec-node2-lcn2q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:00.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:00.824: INFO: Creating a PV followed by a PVC Nov 6 01:58:00.830: INFO: Waiting for PV 
local-pvwnksb to bind to PVC pvc-645k9 Nov 6 01:58:00.830: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-645k9] to have phase Bound Nov 6 01:58:00.832: INFO: PersistentVolumeClaim pvc-645k9 found but phase is Pending instead of Bound. Nov 6 01:58:02.836: INFO: PersistentVolumeClaim pvc-645k9 found and phase=Bound (2.005736107s) Nov 6 01:58:02.836: INFO: Waiting up to 3m0s for PersistentVolume local-pvwnksb to have phase Bound Nov 6 01:58:02.837: INFO: PersistentVolume local-pvwnksb found and phase=Bound (1.676023ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 STEP: Create first pod and check fsGroup is set STEP: Creating a pod Nov 6 01:58:08.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9181 exec pod-571098ce-3857-4b69-b684-fa1d6167634b --namespace=persistent-local-volumes-test-9181 -- stat -c %g /mnt/volume1' Nov 6 01:58:09.098: INFO: stderr: "" Nov 6 01:58:09.098: INFO: stdout: "1234\n" STEP: Create second pod with same fsGroup and check fsGroup is correct STEP: Creating a pod Nov 6 01:58:13.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9181 exec pod-96fbefa6-6f2f-4dce-8e05-1e6e90447c17 --namespace=persistent-local-volumes-test-9181 -- stat -c %g /mnt/volume1' Nov 6 01:58:13.492: INFO: stderr: "" Nov 6 01:58:13.492: INFO: stdout: "1234\n" STEP: Deleting first pod STEP: Deleting pod pod-571098ce-3857-4b69-b684-fa1d6167634b in namespace persistent-local-volumes-test-9181 STEP: Deleting second pod STEP: Deleting pod pod-96fbefa6-6f2f-4dce-8e05-1e6e90447c17 in namespace persistent-local-volumes-test-9181 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:58:13.502: INFO: Deleting PersistentVolumeClaim "pvc-645k9" Nov 6 01:58:13.506: INFO: Deleting PersistentVolume "local-pvwnksb" STEP: Removing the test directory Nov 6 01:58:13.510: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ff03798a-3514-4d00-827c-3db9095d8283] Namespace:persistent-local-volumes-test-9181 PodName:hostexec-node2-lcn2q ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:13.510: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:13.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9181" for this suite. 
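The fsGroup check above (stat -c %g printing 1234 in both pods) reduces to two pods that mount the same PVC and set the same pod-level fsGroup. A compact sketch of one such pod; the claim name and namespace are taken from the log, while the image and command are stand-ins:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var fsGroup1234 = int64(1234)

var fsGroupPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{GenerateName: "fsgroup-pod-", Namespace: "persistent-local-volumes-test-9181"},
	Spec: corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup1234},
		Volumes: []corev1.Volume{{
			Name: "volume1",
			VolumeSource: corev1.VolumeSource{
				PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-645k9"},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "write-pod",
			Image:        "busybox", // stand-in image
			Command:      []string{"sh", "-c", "stat -c %g /mnt/volume1 && sleep 3600"},
			VolumeMounts: []corev1.VolumeMount{{Name: "volume1", MountPath: "/mnt/volume1"}},
		}},
	},
}

Creating a second pod from the same template covers the "two pods simultaneously" half of the spec; both should report group 1234 on the mount point, since fsGroup ownership is applied to the volume on each mount.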
• [SLOW TEST:16.993 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":9,"skipped":165,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:00.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 6 01:58:00.913: INFO: The status of Pod test-hostpath-type-jkdsx is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:58:02.916: INFO: The status of Pod test-hostpath-type-jkdsx is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:58:04.917: INFO: The status of Pod test-hostpath-type-jkdsx is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:14.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-6604" for this suite. 
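Note on the HostPathType failure above: the volume requests type BlockDevice for /mnt/test/afile, but that path was auto-created earlier as a regular file via HostPathFileOrCreate, so the kubelet refuses the mount and the test only has to wait for the resulting error event. A rough sketch of the volume source involved (volume name and path are illustrative):

package example

import corev1 "k8s.io/api/core/v1"

// hostPathBlockDevVolume is only mountable if the node path is a block
// device; any other file type makes the kubelet emit a mount error event,
// which is what "Checking for HostPathType error event" waits for.
func hostPathBlockDevVolume() corev1.Volume {
	blockDev := corev1.HostPathBlockDev // "BlockDevice"
	return corev1.Volume{
		Name: "host-path-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/mnt/test/afile",
				Type: &blockDev,
			},
		},
	}
}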
• [SLOW TEST:14.165 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:166 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev","total":-1,"completed":16,"skipped":550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:13.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 STEP: Creating a pod to test emptydir volume type on tmpfs Nov 6 01:58:13.751: INFO: Waiting up to 5m0s for pod "pod-dcabb0a4-1546-4da2-82c5-143aa49563bb" in namespace "emptydir-8776" to be "Succeeded or Failed" Nov 6 01:58:13.757: INFO: Pod "pod-dcabb0a4-1546-4da2-82c5-143aa49563bb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.433967ms Nov 6 01:58:15.761: INFO: Pod "pod-dcabb0a4-1546-4da2-82c5-143aa49563bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009223929s Nov 6 01:58:17.767: INFO: Pod "pod-dcabb0a4-1546-4da2-82c5-143aa49563bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015136716s Nov 6 01:58:19.771: INFO: Pod "pod-dcabb0a4-1546-4da2-82c5-143aa49563bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019042185s STEP: Saw pod success Nov 6 01:58:19.771: INFO: Pod "pod-dcabb0a4-1546-4da2-82c5-143aa49563bb" satisfied condition "Succeeded or Failed" Nov 6 01:58:19.773: INFO: Trying to get logs from node node2 pod pod-dcabb0a4-1546-4da2-82c5-143aa49563bb container test-container: STEP: delete the pod Nov 6 01:58:19.787: INFO: Waiting for pod pod-dcabb0a4-1546-4da2-82c5-143aa49563bb to disappear Nov 6 01:58:19.789: INFO: Pod pod-dcabb0a4-1546-4da2-82c5-143aa49563bb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:19.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8776" for this suite. 
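Note on the emptyDir/FSGroup test above: the pod combines an in-memory emptyDir (tmpfs) with a pod-level fsGroup and then inspects the mount's mode and group from inside the container, which is why the pod only needs to reach "Succeeded". A rough sketch of that combination; the fsGroup value, image and command here are illustrative rather than copied from the test:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsFSGroupPod mounts a memory-backed emptyDir with an fsGroup applied,
// then prints the mount's mode and owning group so a test can assert on them.
func tmpfsFSGroupPod() *corev1.Pod {
	fsGroup := int64(1234)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c '%a %g' /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}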
• [SLOW TEST:6.080 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 volume on tmpfs should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":10,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:15.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 6 01:58:15.189: INFO: The status of Pod test-hostpath-type-f2tp7 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:58:17.192: INFO: The status of Pod test-hostpath-type-f2tp7 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:58:19.191: INFO: The status of Pod test-hostpath-type-f2tp7 is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Nov 6 01:58:19.194: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-6092 PodName:test-hostpath-type-f2tp7 ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:19.194: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:21.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-6092" for this suite. 
• [SLOW TEST:6.163 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:285 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory","total":-1,"completed":17,"skipped":642,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:07.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-5570 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 6 01:57:07.479: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-attacher Nov 6 01:57:07.481: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5570 Nov 6 01:57:07.481: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5570 Nov 6 01:57:07.484: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5570 Nov 6 01:57:07.486: INFO: creating *v1.Role: csi-mock-volumes-5570-9161/external-attacher-cfg-csi-mock-volumes-5570 Nov 6 01:57:07.489: INFO: creating *v1.RoleBinding: csi-mock-volumes-5570-9161/csi-attacher-role-cfg Nov 6 01:57:07.492: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-provisioner Nov 6 01:57:07.495: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5570 Nov 6 01:57:07.495: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5570 Nov 6 01:57:07.498: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5570 Nov 6 01:57:07.501: INFO: creating *v1.Role: csi-mock-volumes-5570-9161/external-provisioner-cfg-csi-mock-volumes-5570 Nov 6 01:57:07.504: INFO: creating *v1.RoleBinding: csi-mock-volumes-5570-9161/csi-provisioner-role-cfg Nov 6 01:57:07.506: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-resizer Nov 6 01:57:07.508: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5570 Nov 6 01:57:07.508: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5570 Nov 6 01:57:07.511: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5570 Nov 6 01:57:07.514: INFO: creating *v1.Role: csi-mock-volumes-5570-9161/external-resizer-cfg-csi-mock-volumes-5570 Nov 6 01:57:07.517: INFO: creating *v1.RoleBinding: csi-mock-volumes-5570-9161/csi-resizer-role-cfg Nov 6 01:57:07.520: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-snapshotter Nov 6 01:57:07.522: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5570 Nov 6 01:57:07.522: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5570 Nov 6 
01:57:07.524: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5570 Nov 6 01:57:07.527: INFO: creating *v1.Role: csi-mock-volumes-5570-9161/external-snapshotter-leaderelection-csi-mock-volumes-5570 Nov 6 01:57:07.531: INFO: creating *v1.RoleBinding: csi-mock-volumes-5570-9161/external-snapshotter-leaderelection Nov 6 01:57:07.533: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-mock Nov 6 01:57:07.536: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5570 Nov 6 01:57:07.539: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5570 Nov 6 01:57:07.541: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5570 Nov 6 01:57:07.544: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5570 Nov 6 01:57:07.547: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5570 Nov 6 01:57:07.550: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5570 Nov 6 01:57:07.552: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5570 Nov 6 01:57:07.555: INFO: creating *v1.StatefulSet: csi-mock-volumes-5570-9161/csi-mockplugin Nov 6 01:57:07.559: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5570 Nov 6 01:57:07.562: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5570" Nov 6 01:57:07.563: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5570 to register on node node2 I1106 01:57:14.662352 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5570","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:57:14.743953 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1106 01:57:14.745444 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5570","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 01:57:14.747596 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null} I1106 01:57:14.749269 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1106 01:57:14.845366 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5570","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null} STEP: Creating pod Nov 6 01:57:17.082: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil I1106 01:57:17.109803 37 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}} I1106 01:57:19.698303 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null} I1106 01:57:20.893507 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:57:20.896: INFO: >>> kubeConfig: /root/.kube/config I1106 01:57:21.028430 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5","storage.kubernetes.io/csiProvisionerIdentity":"1636163834745-8081-csi-mock-csi-mock-volumes-5570"}},"Response":{},"Error":"","FullError":null} I1106 01:57:21.429655 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Nov 6 01:57:21.431: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:21.665: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:21.763: INFO: >>> kubeConfig: /root/.kube/config I1106 01:57:21.912422 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5/globalmount","target_path":"/var/lib/kubelet/pods/89659041-ba05-4e73-8d4b-e230ea26ac7e/volumes/kubernetes.io~csi/pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5","storage.kubernetes.io/csiProvisionerIdentity":"1636163834745-8081-csi-mock-csi-mock-volumes-5570"}},"Response":{},"Error":"","FullError":null} Nov 6 01:57:27.104: INFO: Deleting pod "pvc-volume-tester-js8bv" in namespace "csi-mock-volumes-5570" Nov 6 01:57:27.111: INFO: Wait up to 5m0s for pod "pvc-volume-tester-js8bv" to be fully deleted Nov 6 01:57:28.509: INFO: >>> kubeConfig: 
/root/.kube/config I1106 01:57:28.602484 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/89659041-ba05-4e73-8d4b-e230ea26ac7e/volumes/kubernetes.io~csi/pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5/mount"},"Response":{},"Error":"","FullError":null} I1106 01:57:28.624882 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 01:57:28.637263 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5/globalmount"},"Response":{},"Error":"","FullError":null} I1106 01:57:33.133826 37 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} STEP: Checking PVC events Nov 6 01:57:34.119: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gxthx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5570", SelfLink:"", UID:"fdefb224-4d2c-4117-bf3b-d4a2f75990a5", ResourceVersion:"105607", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760637, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002131950), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002131968)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00460d540), VolumeMode:(*v1.PersistentVolumeMode)(0xc00460d550), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:57:34.119: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gxthx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5570", SelfLink:"", UID:"fdefb224-4d2c-4117-bf3b-d4a2f75990a5", ResourceVersion:"105610", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760637, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0021319c8), 
FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021319e0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0021319f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002131a10)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00460d5a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00460d5c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:57:34.119: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gxthx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5570", SelfLink:"", UID:"fdefb224-4d2c-4117-bf3b-d4a2f75990a5", ResourceVersion:"105611", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760637, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5570", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003888678), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003888690)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0038886a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0038886c0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0038886d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0038886f0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004786840), VolumeMode:(*v1.PersistentVolumeMode)(0xc004786850), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:57:34.120: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gxthx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5570", SelfLink:"", UID:"fdefb224-4d2c-4117-bf3b-d4a2f75990a5", ResourceVersion:"105614", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760637, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5570"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003888708), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003888720)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003888738), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003888750)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003888768), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003888780)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc004786880), VolumeMode:(*v1.PersistentVolumeMode)(0xc004786890), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:57:34.120: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gxthx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5570", SelfLink:"", UID:"fdefb224-4d2c-4117-bf3b-d4a2f75990a5", ResourceVersion:"105684", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760637, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5570", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0038887b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0038887c8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0038887e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0038887f8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003888810), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003888828)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0047868c0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0047868d0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, 
Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:57:34.120: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gxthx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5570", SelfLink:"", UID:"fdefb224-4d2c-4117-bf3b-d4a2f75990a5", ResourceVersion:"105690", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760637, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5570", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003888858), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003888870)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003888888), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0038888a0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0038888b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0038888d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5", StorageClassName:(*string)(0xc004786900), VolumeMode:(*v1.PersistentVolumeMode)(0xc004786910), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:57:34.120: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gxthx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5570", SelfLink:"", UID:"fdefb224-4d2c-4117-bf3b-d4a2f75990a5", ResourceVersion:"105691", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760637, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5570", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003bd0b28), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003bd0b40)}, 
v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003bd0b58), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003bd0b70)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003bd0b88), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003bd0ba0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5", StorageClassName:(*string)(0xc0049c2c20), VolumeMode:(*v1.PersistentVolumeMode)(0xc0049c2c30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:57:34.120: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gxthx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5570", SelfLink:"", UID:"fdefb224-4d2c-4117-bf3b-d4a2f75990a5", ResourceVersion:"106017", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760637, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc003bd0bd0), DeletionGracePeriodSeconds:(*int64)(0xc003bfd108), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5570", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003bd0be8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003bd0c00)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003bd0c18), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003bd0c30)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003bd0c48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003bd0c60)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5", StorageClassName:(*string)(0xc0049c2c70), VolumeMode:(*v1.PersistentVolumeMode)(0xc0049c2c80), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, 
Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 01:57:34.120: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-gxthx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5570", SelfLink:"", UID:"fdefb224-4d2c-4117-bf3b-d4a2f75990a5", ResourceVersion:"106018", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760637, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc002db84b0), DeletionGracePeriodSeconds:(*int64)(0xc0038be718), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5570", "volume.kubernetes.io/selected-node":"node2"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002db84c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002db84e0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002db84f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002db8510)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002db8528), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002db8540)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-fdefb224-4d2c-4117-bf3b-d4a2f75990a5", StorageClassName:(*string)(0xc004734530), VolumeMode:(*v1.PersistentVolumeMode)(0xc004734540), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-js8bv Nov 6 01:57:34.121: INFO: Deleting pod "pvc-volume-tester-js8bv" in namespace "csi-mock-volumes-5570" STEP: Deleting claim pvc-gxthx STEP: Deleting storageclass csi-mock-volumes-5570-sctrgrd STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5570 STEP: Waiting for namespaces [csi-mock-volumes-5570] to vanish STEP: uninstalling csi mock driver Nov 6 01:57:40.158: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-attacher Nov 6 01:57:40.162: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5570 Nov 6 01:57:40.166: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5570 Nov 6 01:57:40.170: INFO: deleting *v1.Role: csi-mock-volumes-5570-9161/external-attacher-cfg-csi-mock-volumes-5570 Nov 6 01:57:40.173: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-5570-9161/csi-attacher-role-cfg Nov 6 01:57:40.177: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-provisioner Nov 6 01:57:40.181: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5570 Nov 6 01:57:40.184: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5570 Nov 6 01:57:40.187: INFO: deleting *v1.Role: csi-mock-volumes-5570-9161/external-provisioner-cfg-csi-mock-volumes-5570 Nov 6 01:57:40.191: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5570-9161/csi-provisioner-role-cfg Nov 6 01:57:40.194: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-resizer Nov 6 01:57:40.197: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5570 Nov 6 01:57:40.200: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5570 Nov 6 01:57:40.203: INFO: deleting *v1.Role: csi-mock-volumes-5570-9161/external-resizer-cfg-csi-mock-volumes-5570 Nov 6 01:57:40.207: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5570-9161/csi-resizer-role-cfg Nov 6 01:57:40.210: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-snapshotter Nov 6 01:57:40.213: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5570 Nov 6 01:57:40.216: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5570 Nov 6 01:57:40.220: INFO: deleting *v1.Role: csi-mock-volumes-5570-9161/external-snapshotter-leaderelection-csi-mock-volumes-5570 Nov 6 01:57:40.224: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5570-9161/external-snapshotter-leaderelection Nov 6 01:57:40.228: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5570-9161/csi-mock Nov 6 01:57:40.231: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5570 Nov 6 01:57:40.234: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5570 Nov 6 01:57:40.238: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5570 Nov 6 01:57:40.241: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5570 Nov 6 01:57:40.244: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5570 Nov 6 01:57:40.247: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5570 Nov 6 01:57:40.250: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5570 Nov 6 01:57:40.253: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5570-9161/csi-mockplugin Nov 6 01:57:40.257: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5570 STEP: deleting the driver namespace: csi-mock-volumes-5570-9161 STEP: Waiting for namespaces [csi-mock-volumes-5570-9161] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:24.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:76.860 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 exhausted, late binding, with topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED 
[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":4,"skipped":77,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:24.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 6 01:58:24.306: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:24.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1539" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:24.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Nov 6 01:58:24.382: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:24.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-3785" for this suite. 
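Note on the two skips above: "Only supported for providers [gce gke aws] (not local)" and "Only supported for node OS distro [gci ubuntu custom] (not debian)" are produced in BeforeEach by the e2e framework's skipper helpers, which compare the configured provider and node distro against an allow-list and mark the spec as skipped before any resources are created. Approximately (a sketch against the framework's skipper package, not the exact test code):

package example

import (
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// guardSpec shows the typical BeforeEach guards behind the SKIPPING entries
// above: on a "local" provider and a debian node image, both calls skip the
// spec immediately instead of failing it.
func guardSpec() {
	// Volume metrics rely on in-tree cloud volume plugins (GCE PD / EBS).
	e2eskipper.SkipUnlessProviderIs("gce", "gke", "aws")
	// The NFSv3 mount test assumes a node image with a known NFS client.
	e2eskipper.SkipUnlessNodeOSDistroIs("gci", "ubuntu", "custom")
}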
S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 NFSv3 [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:102 should be mountable for NFSv3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:02.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Nov 6 01:58:04.898: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7c0bbbe5-c65f-46b0-943b-fd88fecd13c3] Namespace:persistent-local-volumes-test-3384 PodName:hostexec-node1-mmd72 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:04.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:04.996: INFO: Creating a PV followed by a PVC Nov 6 01:58:05.003: INFO: Waiting for PV local-pvjk69f to bind to PVC pvc-z6696 Nov 6 01:58:05.003: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-z6696] to have phase Bound Nov 6 01:58:05.005: INFO: PersistentVolumeClaim pvc-z6696 found but phase is Pending instead of Bound. 
Nov 6 01:58:07.008: INFO: PersistentVolumeClaim pvc-z6696 found and phase=Bound (2.004999365s) Nov 6 01:58:07.008: INFO: Waiting up to 3m0s for PersistentVolume local-pvjk69f to have phase Bound Nov 6 01:58:07.011: INFO: PersistentVolume local-pvjk69f found and phase=Bound (2.825784ms) [It] should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 STEP: local-volume-type: dir STEP: Initializing test volumes Nov 6 01:58:07.015: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e7164685-6ccb-40ea-8a0b-c66d091ee680] Namespace:persistent-local-volumes-test-3384 PodName:hostexec-node1-mmd72 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:07.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:08.399: INFO: Creating a PV followed by a PVC Nov 6 01:58:08.406: INFO: Waiting for PV local-pvxf86k to bind to PVC pvc-jpmq8 Nov 6 01:58:08.406: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jpmq8] to have phase Bound Nov 6 01:58:08.408: INFO: PersistentVolumeClaim pvc-jpmq8 found but phase is Pending instead of Bound. Nov 6 01:58:10.411: INFO: PersistentVolumeClaim pvc-jpmq8 found but phase is Pending instead of Bound. Nov 6 01:58:12.415: INFO: PersistentVolumeClaim pvc-jpmq8 found but phase is Pending instead of Bound. Nov 6 01:58:14.419: INFO: PersistentVolumeClaim pvc-jpmq8 found but phase is Pending instead of Bound. Nov 6 01:58:16.424: INFO: PersistentVolumeClaim pvc-jpmq8 found but phase is Pending instead of Bound. Nov 6 01:58:18.428: INFO: PersistentVolumeClaim pvc-jpmq8 found but phase is Pending instead of Bound. Nov 6 01:58:20.434: INFO: PersistentVolumeClaim pvc-jpmq8 found but phase is Pending instead of Bound. Nov 6 01:58:22.438: INFO: PersistentVolumeClaim pvc-jpmq8 found and phase=Bound (14.032078587s) Nov 6 01:58:22.438: INFO: Waiting up to 3m0s for PersistentVolume local-pvxf86k to have phase Bound Nov 6 01:58:22.440: INFO: PersistentVolume local-pvxf86k found and phase=Bound (2.385125ms) Nov 6 01:58:22.457: INFO: Waiting up to 5m0s for pod "pod-3c6b2482-dc40-4b0e-96b0-bd003c1b7320" in namespace "persistent-local-volumes-test-3384" to be "Unschedulable" Nov 6 01:58:22.460: INFO: Pod "pod-3c6b2482-dc40-4b0e-96b0-bd003c1b7320": Phase="Pending", Reason="", readiness=false. Elapsed: 3.334822ms Nov 6 01:58:24.465: INFO: Pod "pod-3c6b2482-dc40-4b0e-96b0-bd003c1b7320": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008307871s Nov 6 01:58:24.465: INFO: Pod "pod-3c6b2482-dc40-4b0e-96b0-bd003c1b7320" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Nov 6 01:58:24.465: INFO: Deleting PersistentVolumeClaim "pvc-z6696" Nov 6 01:58:24.469: INFO: Deleting PersistentVolume "local-pvjk69f" STEP: Removing the test directory Nov 6 01:58:24.472: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7c0bbbe5-c65f-46b0-943b-fd88fecd13c3] Namespace:persistent-local-volumes-test-3384 PodName:hostexec-node1-mmd72 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:24.472: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:24.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3384" for this suite. • [SLOW TEST:21.726 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":7,"skipped":316,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:02.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:354 STEP: Initializing test volumes Nov 6 01:58:06.336: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-452da471-c473-4a19-88eb-9394594f9131] Namespace:persistent-local-volumes-test-2689 PodName:hostexec-node1-fvfts ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:06.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:06.438: INFO: Creating a PV followed by a PVC Nov 6 01:58:06.444: INFO: Waiting for PV local-pv5rw7k to bind to PVC pvc-lljmr Nov 6 01:58:06.444: INFO: Waiting up to 
timeout=3m0s for PersistentVolumeClaims [pvc-lljmr] to have phase Bound Nov 6 01:58:06.447: INFO: PersistentVolumeClaim pvc-lljmr found but phase is Pending instead of Bound. Nov 6 01:58:08.450: INFO: PersistentVolumeClaim pvc-lljmr found and phase=Bound (2.005610858s) Nov 6 01:58:08.450: INFO: Waiting up to 3m0s for PersistentVolume local-pv5rw7k to have phase Bound Nov 6 01:58:08.453: INFO: PersistentVolume local-pv5rw7k found and phase=Bound (2.877422ms) [It] should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 STEP: local-volume-type: dir STEP: Initializing test volumes Nov 6 01:58:08.457: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a20516a6-de2e-4dbd-9ee9-336c02f042e3] Namespace:persistent-local-volumes-test-2689 PodName:hostexec-node1-fvfts ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:08.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:10.657: INFO: Creating a PV followed by a PVC Nov 6 01:58:10.664: INFO: Waiting for PV local-pvcfdrq to bind to PVC pvc-mzslw Nov 6 01:58:10.664: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-mzslw] to have phase Bound Nov 6 01:58:10.667: INFO: PersistentVolumeClaim pvc-mzslw found but phase is Pending instead of Bound. Nov 6 01:58:12.670: INFO: PersistentVolumeClaim pvc-mzslw found but phase is Pending instead of Bound. Nov 6 01:58:14.674: INFO: PersistentVolumeClaim pvc-mzslw found but phase is Pending instead of Bound. Nov 6 01:58:16.678: INFO: PersistentVolumeClaim pvc-mzslw found but phase is Pending instead of Bound. Nov 6 01:58:18.681: INFO: PersistentVolumeClaim pvc-mzslw found but phase is Pending instead of Bound. Nov 6 01:58:20.687: INFO: PersistentVolumeClaim pvc-mzslw found but phase is Pending instead of Bound. Nov 6 01:58:22.691: INFO: PersistentVolumeClaim pvc-mzslw found and phase=Bound (12.026225578s) Nov 6 01:58:22.691: INFO: Waiting up to 3m0s for PersistentVolume local-pvcfdrq to have phase Bound Nov 6 01:58:22.693: INFO: PersistentVolume local-pvcfdrq found and phase=Bound (2.612685ms) Nov 6 01:58:22.707: INFO: Waiting up to 5m0s for pod "pod-cb067845-9f96-4bb7-a17c-4ed25335c0dc" in namespace "persistent-local-volumes-test-2689" to be "Unschedulable" Nov 6 01:58:22.709: INFO: Pod "pod-cb067845-9f96-4bb7-a17c-4ed25335c0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 1.831928ms Nov 6 01:58:24.713: INFO: Pod "pod-cb067845-9f96-4bb7-a17c-4ed25335c0dc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005804829s Nov 6 01:58:24.713: INFO: Pod "pod-cb067845-9f96-4bb7-a17c-4ed25335c0dc" satisfied condition "Unschedulable" [AfterEach] Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:370 STEP: Cleaning up PVC and PV Nov 6 01:58:24.713: INFO: Deleting PersistentVolumeClaim "pvc-lljmr" Nov 6 01:58:24.717: INFO: Deleting PersistentVolume "local-pv5rw7k" STEP: Removing the test directory Nov 6 01:58:24.721: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-452da471-c473-4a19-88eb-9394594f9131] Namespace:persistent-local-volumes-test-2689 PodName:hostexec-node1-fvfts ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:24.721: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:24.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2689" for this suite. • [SLOW TEST:22.534 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pod with node different from PV's NodeAffinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347 should fail scheduling due to different NodeSelector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":11,"skipped":279,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:24.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] should report an error and create no PV /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Nov 6 01:58:24.871: INFO: Only supported for providers [aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:24.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-938" for this suite. 
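
The two "fail scheduling" specs above both pre-bind a local PV whose node affinity points at one node and then create a pod pinned to a different node, so the pod is reported as "Unschedulable". A minimal sketch of that conflict, applied with kubectl; all names, the path, and the node names are hypothetical, and a "local-storage" StorageClass with provisioner kubernetes.io/no-provisioner is assumed to exist:

    kubectl apply -f - <<'EOF'
    # Local PV that can only be used from node1 (hypothetical path and node name).
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-pv-example
    spec:
      capacity:
        storage: 2Gi
      accessModes: ["ReadWriteOnce"]
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /mnt/disks/example
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node1"]
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-example
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-storage
      resources:
        requests:
          storage: 2Gi
    ---
    # Pod forced onto node2; it stays Pending because the bound PV requires node1.
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-affinity-conflict
    spec:
      nodeSelector:
        kubernetes.io/hostname: node2
      containers:
      - name: write-pod
        image: busybox        # any minimal image works for the demonstration
        command: ["sleep", "3600"]
        volumeMounts:
        - name: volume1
          mountPath: /mnt/volume1
      volumes:
      - name: volume1
        persistentVolumeClaim:
          claimName: pvc-example
    EOF

kubectl describe on the pod then typically shows a FailedScheduling event mentioning a volume node affinity conflict, which is what the two specs assert by waiting for the pod to reach the "Unschedulable" condition.
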
S [SKIPPING] [0.029 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Invalid AWS KMS key /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824 should report an error and create no PV [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825 Only supported for providers [aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:826 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:24.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 6 01:58:24.950: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:24.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1916" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:292 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:10.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6" Nov 6 01:58:14.409: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6 && dd if=/dev/zero of=/tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6/file] Namespace:persistent-local-volumes-test-6747 PodName:hostexec-node2-6gxvt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:14.409: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:14.655: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6747 PodName:hostexec-node2-6gxvt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:14.655: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:14.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6 && chmod o+rwx /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6] Namespace:persistent-local-volumes-test-6747 PodName:hostexec-node2-6gxvt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:14.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:15.172: INFO: Creating a PV followed by a PVC Nov 6 01:58:15.178: INFO: Waiting for PV local-pv252mh to bind to PVC pvc-lhdfw Nov 6 01:58:15.178: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-lhdfw] to have phase Bound Nov 6 01:58:15.182: INFO: PersistentVolumeClaim pvc-lhdfw found but phase is Pending instead of Bound. Nov 6 01:58:17.185: INFO: PersistentVolumeClaim pvc-lhdfw found but phase is Pending instead of Bound. Nov 6 01:58:19.190: INFO: PersistentVolumeClaim pvc-lhdfw found but phase is Pending instead of Bound. Nov 6 01:58:21.200: INFO: PersistentVolumeClaim pvc-lhdfw found but phase is Pending instead of Bound. 
Nov 6 01:58:23.203: INFO: PersistentVolumeClaim pvc-lhdfw found and phase=Bound (8.02569982s) Nov 6 01:58:23.203: INFO: Waiting up to 3m0s for PersistentVolume local-pv252mh to have phase Bound Nov 6 01:58:23.205: INFO: PersistentVolume local-pv252mh found and phase=Bound (1.793703ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:58:29.230: INFO: pod "pod-3ac05892-61dc-4d15-9ca0-37659fb92456" created on Node "node2" STEP: Writing in pod1 Nov 6 01:58:29.230: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6747 PodName:pod-3ac05892-61dc-4d15-9ca0-37659fb92456 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:29.230: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:29.316: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 6 01:58:29.316: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6747 PodName:pod-3ac05892-61dc-4d15-9ca0-37659fb92456 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:29.316: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:29.406: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 6 01:58:29.406: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6747 PodName:pod-3ac05892-61dc-4d15-9ca0-37659fb92456 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:29.406: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:29.488: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-3ac05892-61dc-4d15-9ca0-37659fb92456 in namespace persistent-local-volumes-test-6747 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:58:29.493: INFO: Deleting PersistentVolumeClaim "pvc-lhdfw" Nov 6 01:58:29.497: INFO: Deleting PersistentVolume "local-pv252mh" Nov 6 01:58:29.501: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6] Namespace:persistent-local-volumes-test-6747 PodName:hostexec-node2-6gxvt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:29.501: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:29.601: INFO: 
ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6747 PodName:hostexec-node2-6gxvt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:29.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6/file Nov 6 01:58:29.694: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6747 PodName:hostexec-node2-6gxvt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:29.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6 Nov 6 01:58:29.794: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fe129b72-af01-4989-b116-30417cc34aa6] Namespace:persistent-local-volumes-test-6747 PodName:hostexec-node2-6gxvt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:29.794: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:29.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6747" for this suite. 
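
The blockfswithformat volume type in the spec above is backed by a loop device that the test prepares on the node before creating the PV and tears down again after the PV is deleted. Condensed from the ExecWithOptions entries above into a standalone host-side sketch (the scratch path is hypothetical; the loop device is whatever losetup assigns):

    dir=/tmp/local-volume-test-example            # hypothetical scratch path on the node
    mkdir -p "$dir"
    dd if=/dev/zero of="$dir/file" bs=4096 count=5120   # ~20 MiB backing file
    losetup -f "$dir/file"                        # attach the first free loop device
    loopdev=$(losetup | grep "$dir/file" | awk '{ print $1 }')
    mkfs -t ext4 "$loopdev"                       # format the loop device
    mount -t ext4 "$loopdev" "$dir"               # mount it back over the scratch path
    chmod o+rwx "$dir"                            # let non-root pods write to it

    # Teardown, mirroring the cleanup steps logged after the PV and PVC are deleted:
    umount "$dir"
    losetup -d "$loopdev"
    rm -r "$dir"

The local PV then simply points at the mounted path, so the pod sees an ext4 filesystem rather than a plain host directory.
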
• [SLOW TEST:19.568 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":14,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:12.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:58:16.716: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-d4bd296f-257a-48a7-a881-cb5750e850c7 && mount --bind /tmp/local-volume-test-d4bd296f-257a-48a7-a881-cb5750e850c7 /tmp/local-volume-test-d4bd296f-257a-48a7-a881-cb5750e850c7] Namespace:persistent-local-volumes-test-5637 PodName:hostexec-node2-7vtkq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:16.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:16.838: INFO: Creating a PV followed by a PVC Nov 6 01:58:16.845: INFO: Waiting for PV local-pv6pjmn to bind to PVC pvc-lsw8f Nov 6 01:58:16.845: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-lsw8f] to have phase Bound Nov 6 01:58:16.847: INFO: PersistentVolumeClaim pvc-lsw8f found but phase is Pending instead of Bound. Nov 6 01:58:18.850: INFO: PersistentVolumeClaim pvc-lsw8f found but phase is Pending instead of Bound. Nov 6 01:58:20.854: INFO: PersistentVolumeClaim pvc-lsw8f found but phase is Pending instead of Bound. 
Nov 6 01:58:22.857: INFO: PersistentVolumeClaim pvc-lsw8f found and phase=Bound (6.012294401s) Nov 6 01:58:22.857: INFO: Waiting up to 3m0s for PersistentVolume local-pv6pjmn to have phase Bound Nov 6 01:58:22.860: INFO: PersistentVolume local-pv6pjmn found and phase=Bound (2.571972ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 6 01:58:28.885: INFO: pod "pod-10a6b182-4348-41f5-bdee-5d455fac9445" created on Node "node2" STEP: Writing in pod1 Nov 6 01:58:28.885: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5637 PodName:pod-10a6b182-4348-41f5-bdee-5d455fac9445 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:28.885: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:29.018: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:58:29.018: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5637 PodName:pod-10a6b182-4348-41f5-bdee-5d455fac9445 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:29.018: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:29.105: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 6 01:58:35.129: INFO: pod "pod-b0c2d695-097f-4c19-927d-eede23445988" created on Node "node2" Nov 6 01:58:35.129: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5637 PodName:pod-b0c2d695-097f-4c19-927d-eede23445988 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:35.129: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:35.223: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 6 01:58:35.223: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-d4bd296f-257a-48a7-a881-cb5750e850c7 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5637 PodName:pod-b0c2d695-097f-4c19-927d-eede23445988 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:35.223: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:35.307: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-d4bd296f-257a-48a7-a881-cb5750e850c7 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 6 01:58:35.307: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5637 PodName:pod-10a6b182-4348-41f5-bdee-5d455fac9445 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:35.307: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:35.417: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-d4bd296f-257a-48a7-a881-cb5750e850c7", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-10a6b182-4348-41f5-bdee-5d455fac9445 in 
namespace persistent-local-volumes-test-5637 STEP: Deleting pod2 STEP: Deleting pod pod-b0c2d695-097f-4c19-927d-eede23445988 in namespace persistent-local-volumes-test-5637 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:58:35.427: INFO: Deleting PersistentVolumeClaim "pvc-lsw8f" Nov 6 01:58:35.431: INFO: Deleting PersistentVolume "local-pv6pjmn" STEP: Removing the test directory Nov 6 01:58:35.434: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-d4bd296f-257a-48a7-a881-cb5750e850c7 && rm -r /tmp/local-volume-test-d4bd296f-257a-48a7-a881-cb5750e850c7] Namespace:persistent-local-volumes-test-5637 PodName:hostexec-node2-7vtkq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:35.434: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:35.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5637" for this suite. • [SLOW TEST:22.885 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":458,"failed":0} [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:35.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 6 01:58:35.579: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:35.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-8440" for this suite. 
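
The read/write verification in the dir-bindmounted "Two pods" spec above comes down to running a shell in each pod against the shared mount at /mnt/volume1; the same commands appear in the podRWCmdExec entries. Reproduced by hand it would look roughly like this (namespace and pod names are hypothetical; the e2e framework drives the identical commands through its exec helper):

    ns=persistent-local-volumes-test-example      # hypothetical namespace
    # Write from pod1, then confirm pod2 sees the same file on the shared local volume.
    kubectl -n "$ns" exec pod1 -c write-pod -- \
      /bin/sh -c 'mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file'
    kubectl -n "$ns" exec pod2 -c write-pod -- /bin/sh -c 'cat /mnt/volume1/test-file'
    # Expected output: test-file-content

Writes made from pod2 are then checked from pod1 in the same way, which is what demonstrates that both pods are mounting the same underlying host directory.
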
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule pods each with a PD, delete pod and verify detach [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:93 for RW PD with pod delete grace period of "immediate (0s)" /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:135 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:25.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:58:29.120: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-b4d8d927-1f49-4f29-9852-1c6bd06e70c7-backend && ln -s /tmp/local-volume-test-b4d8d927-1f49-4f29-9852-1c6bd06e70c7-backend /tmp/local-volume-test-b4d8d927-1f49-4f29-9852-1c6bd06e70c7] Namespace:persistent-local-volumes-test-3219 PodName:hostexec-node1-fhm5n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:29.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:29.218: INFO: Creating a PV followed by a PVC Nov 6 01:58:29.224: INFO: Waiting for PV local-pv7n8d5 to bind to PVC pvc-kpmnh Nov 6 01:58:29.224: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-kpmnh] to have phase Bound Nov 6 01:58:29.226: INFO: PersistentVolumeClaim pvc-kpmnh found but phase is Pending instead of Bound. 
Nov 6 01:58:31.230: INFO: PersistentVolumeClaim pvc-kpmnh found and phase=Bound (2.005667132s) Nov 6 01:58:31.230: INFO: Waiting up to 3m0s for PersistentVolume local-pv7n8d5 to have phase Bound Nov 6 01:58:31.232: INFO: PersistentVolume local-pv7n8d5 found and phase=Bound (2.510341ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:58:35.264: INFO: pod "pod-c1c95e31-d1a6-43be-a62c-68d7fb9e8e47" created on Node "node1" STEP: Writing in pod1 Nov 6 01:58:35.264: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3219 PodName:pod-c1c95e31-d1a6-43be-a62c-68d7fb9e8e47 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:35.264: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:35.443: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 6 01:58:35.443: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3219 PodName:pod-c1c95e31-d1a6-43be-a62c-68d7fb9e8e47 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:35.443: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:35.565: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-c1c95e31-d1a6-43be-a62c-68d7fb9e8e47 in namespace persistent-local-volumes-test-3219 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:58:35.570: INFO: Deleting PersistentVolumeClaim "pvc-kpmnh" Nov 6 01:58:35.574: INFO: Deleting PersistentVolume "local-pv7n8d5" STEP: Removing the test directory Nov 6 01:58:35.578: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b4d8d927-1f49-4f29-9852-1c6bd06e70c7 && rm -r /tmp/local-volume-test-b4d8d927-1f49-4f29-9852-1c6bd06e70c7-backend] Namespace:persistent-local-volumes-test-3219 PodName:hostexec-node1-fhm5n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:35.578: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:35.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3219" for this suite. 
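
The dir-link volume type in the spec just above differs from the plain dir type only in how the PV path is prepared on the node: the data lives in a separate backend directory and the path handed to the PV is a symlink to it (the dir-bindmounted type earlier in this run does the same with a bind mount instead of a link). A condensed sketch of the setup and cleanup, with a hypothetical path:

    dir=/tmp/local-volume-test-example            # hypothetical path referenced by the PV
    # Setup: create the backend directory and expose it through a symlink.
    mkdir "${dir}-backend"
    ln -s "${dir}-backend" "$dir"

    # Cleanup once the PV and PVC are deleted: remove the link, then the backend.
    rm -r "$dir"
    rm -r "${dir}-backend"
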
• [SLOW TEST:10.603 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":384,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:24.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1" Nov 6 01:58:28.754: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1 && dd if=/dev/zero of=/tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1/file] Namespace:persistent-local-volumes-test-522 PodName:hostexec-node2-bczbz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:28.754: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:28.877: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-522 PodName:hostexec-node2-bczbz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:28.877: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:29.023: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop1 && mount -t ext4 /dev/loop1 /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1 && chmod o+rwx /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1] Namespace:persistent-local-volumes-test-522 PodName:hostexec-node2-bczbz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:29.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 
01:58:29.295: INFO: Creating a PV followed by a PVC Nov 6 01:58:29.302: INFO: Waiting for PV local-pvm26tv to bind to PVC pvc-r9znw Nov 6 01:58:29.302: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-r9znw] to have phase Bound Nov 6 01:58:29.304: INFO: PersistentVolumeClaim pvc-r9znw found but phase is Pending instead of Bound. Nov 6 01:58:31.309: INFO: PersistentVolumeClaim pvc-r9znw found and phase=Bound (2.006941721s) Nov 6 01:58:31.309: INFO: Waiting up to 3m0s for PersistentVolume local-pvm26tv to have phase Bound Nov 6 01:58:31.311: INFO: PersistentVolume local-pvm26tv found and phase=Bound (2.565899ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:58:35.338: INFO: pod "pod-642a8392-0d89-4da6-a7bb-023c6a1167f3" created on Node "node2" STEP: Writing in pod1 Nov 6 01:58:35.338: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-522 PodName:pod-642a8392-0d89-4da6-a7bb-023c6a1167f3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:35.338: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:35.446: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 6 01:58:35.446: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-522 PodName:pod-642a8392-0d89-4da6-a7bb-023c6a1167f3 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:35.446: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:35.546: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-642a8392-0d89-4da6-a7bb-023c6a1167f3 in namespace persistent-local-volumes-test-522 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:58:35.551: INFO: Deleting PersistentVolumeClaim "pvc-r9znw" Nov 6 01:58:35.554: INFO: Deleting PersistentVolume "local-pvm26tv" Nov 6 01:58:35.558: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1] Namespace:persistent-local-volumes-test-522 PodName:hostexec-node2-bczbz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:35.558: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:35.684: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-522 
PodName:hostexec-node2-bczbz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:35.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1/file Nov 6 01:58:35.802: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-522 PodName:hostexec-node2-bczbz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:35.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1 Nov 6 01:58:35.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a277ec92-5c82-4b53-91a8-23f8453578d1] Namespace:persistent-local-volumes-test-522 PodName:hostexec-node2-bczbz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:35.884: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:35.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-522" for this suite. • [SLOW TEST:11.275 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":8,"skipped":377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:36.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 STEP: Creating a pod to test downward API volume plugin Nov 6 01:58:36.164: INFO: Waiting up to 5m0s for pod "metadata-volume-d709d8a7-6e52-4f12-9dd2-a89efa22c9b1" in namespace "downward-api-3795" 
to be "Succeeded or Failed" Nov 6 01:58:36.168: INFO: Pod "metadata-volume-d709d8a7-6e52-4f12-9dd2-a89efa22c9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45881ms Nov 6 01:58:38.172: INFO: Pod "metadata-volume-d709d8a7-6e52-4f12-9dd2-a89efa22c9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007253285s Nov 6 01:58:40.177: INFO: Pod "metadata-volume-d709d8a7-6e52-4f12-9dd2-a89efa22c9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01231865s Nov 6 01:58:42.180: INFO: Pod "metadata-volume-d709d8a7-6e52-4f12-9dd2-a89efa22c9b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01515407s STEP: Saw pod success Nov 6 01:58:42.180: INFO: Pod "metadata-volume-d709d8a7-6e52-4f12-9dd2-a89efa22c9b1" satisfied condition "Succeeded or Failed" Nov 6 01:58:42.181: INFO: Trying to get logs from node node1 pod metadata-volume-d709d8a7-6e52-4f12-9dd2-a89efa22c9b1 container client-container: STEP: delete the pod Nov 6 01:58:42.197: INFO: Waiting for pod metadata-volume-d709d8a7-6e52-4f12-9dd2-a89efa22c9b1 to disappear Nov 6 01:58:42.199: INFO: Pod metadata-volume-d709d8a7-6e52-4f12-9dd2-a89efa22c9b1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:42.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3795" for this suite. • [SLOW TEST:6.076 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:59.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-9406 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:57:59.813: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9406-8791/csi-attacher Nov 6 01:57:59.816: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9406 Nov 6 01:57:59.816: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9406 Nov 6 01:57:59.819: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9406 Nov 6 01:57:59.822: INFO: creating *v1.Role: csi-mock-volumes-9406-8791/external-attacher-cfg-csi-mock-volumes-9406 Nov 6 01:57:59.825: INFO: creating *v1.RoleBinding: csi-mock-volumes-9406-8791/csi-attacher-role-cfg Nov 6 01:57:59.828: INFO: creating *v1.ServiceAccount: 
csi-mock-volumes-9406-8791/csi-provisioner Nov 6 01:57:59.831: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9406 Nov 6 01:57:59.831: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9406 Nov 6 01:57:59.834: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9406 Nov 6 01:57:59.837: INFO: creating *v1.Role: csi-mock-volumes-9406-8791/external-provisioner-cfg-csi-mock-volumes-9406 Nov 6 01:57:59.839: INFO: creating *v1.RoleBinding: csi-mock-volumes-9406-8791/csi-provisioner-role-cfg Nov 6 01:57:59.841: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9406-8791/csi-resizer Nov 6 01:57:59.843: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9406 Nov 6 01:57:59.843: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9406 Nov 6 01:57:59.846: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9406 Nov 6 01:57:59.848: INFO: creating *v1.Role: csi-mock-volumes-9406-8791/external-resizer-cfg-csi-mock-volumes-9406 Nov 6 01:57:59.851: INFO: creating *v1.RoleBinding: csi-mock-volumes-9406-8791/csi-resizer-role-cfg Nov 6 01:57:59.854: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9406-8791/csi-snapshotter Nov 6 01:57:59.857: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9406 Nov 6 01:57:59.857: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9406 Nov 6 01:57:59.859: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9406 Nov 6 01:57:59.862: INFO: creating *v1.Role: csi-mock-volumes-9406-8791/external-snapshotter-leaderelection-csi-mock-volumes-9406 Nov 6 01:57:59.865: INFO: creating *v1.RoleBinding: csi-mock-volumes-9406-8791/external-snapshotter-leaderelection Nov 6 01:57:59.868: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9406-8791/csi-mock Nov 6 01:57:59.871: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9406 Nov 6 01:57:59.874: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9406 Nov 6 01:57:59.876: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9406 Nov 6 01:57:59.879: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9406 Nov 6 01:57:59.882: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9406 Nov 6 01:57:59.884: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9406 Nov 6 01:57:59.887: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9406 Nov 6 01:57:59.890: INFO: creating *v1.StatefulSet: csi-mock-volumes-9406-8791/csi-mockplugin Nov 6 01:57:59.894: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9406 Nov 6 01:57:59.897: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9406" Nov 6 01:57:59.899: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9406 to register on node node1 STEP: Creating pod Nov 6 01:58:09.418: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:58:09.422: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-59xb2] to have phase Bound Nov 6 01:58:09.424: INFO: PersistentVolumeClaim pvc-59xb2 found but phase is Pending instead of Bound. 
Nov 6 01:58:11.429: INFO: PersistentVolumeClaim pvc-59xb2 found and phase=Bound (2.006605637s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-pmjx7 Nov 6 01:58:19.460: INFO: Deleting pod "pvc-volume-tester-pmjx7" in namespace "csi-mock-volumes-9406" Nov 6 01:58:19.464: INFO: Wait up to 5m0s for pod "pvc-volume-tester-pmjx7" to be fully deleted STEP: Deleting claim pvc-59xb2 Nov 6 01:58:29.475: INFO: Waiting up to 2m0s for PersistentVolume pvc-b2776163-ad03-423e-9051-0927d361d8ff to get deleted Nov 6 01:58:29.478: INFO: PersistentVolume pvc-b2776163-ad03-423e-9051-0927d361d8ff found and phase=Bound (2.256952ms) Nov 6 01:58:31.486: INFO: PersistentVolume pvc-b2776163-ad03-423e-9051-0927d361d8ff was removed STEP: Deleting storageclass csi-mock-volumes-9406-scl49vq STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9406 STEP: Waiting for namespaces [csi-mock-volumes-9406] to vanish STEP: uninstalling csi mock driver Nov 6 01:58:37.499: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9406-8791/csi-attacher Nov 6 01:58:37.503: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9406 Nov 6 01:58:37.508: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9406 Nov 6 01:58:37.511: INFO: deleting *v1.Role: csi-mock-volumes-9406-8791/external-attacher-cfg-csi-mock-volumes-9406 Nov 6 01:58:37.516: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9406-8791/csi-attacher-role-cfg Nov 6 01:58:37.519: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9406-8791/csi-provisioner Nov 6 01:58:37.522: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9406 Nov 6 01:58:37.525: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9406 Nov 6 01:58:37.530: INFO: deleting *v1.Role: csi-mock-volumes-9406-8791/external-provisioner-cfg-csi-mock-volumes-9406 Nov 6 01:58:37.533: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9406-8791/csi-provisioner-role-cfg Nov 6 01:58:37.540: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9406-8791/csi-resizer Nov 6 01:58:37.545: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9406 Nov 6 01:58:37.552: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9406 Nov 6 01:58:37.557: INFO: deleting *v1.Role: csi-mock-volumes-9406-8791/external-resizer-cfg-csi-mock-volumes-9406 Nov 6 01:58:37.560: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9406-8791/csi-resizer-role-cfg Nov 6 01:58:37.565: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9406-8791/csi-snapshotter Nov 6 01:58:37.569: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9406 Nov 6 01:58:37.572: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9406 Nov 6 01:58:37.575: INFO: deleting *v1.Role: csi-mock-volumes-9406-8791/external-snapshotter-leaderelection-csi-mock-volumes-9406 Nov 6 01:58:37.579: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9406-8791/external-snapshotter-leaderelection Nov 6 01:58:37.582: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9406-8791/csi-mock Nov 6 01:58:37.585: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9406 Nov 6 01:58:37.589: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9406 Nov 6 01:58:37.592: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9406 Nov 6 01:58:37.596: 
INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9406 Nov 6 01:58:37.600: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9406 Nov 6 01:58:37.604: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9406 Nov 6 01:58:37.608: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9406 Nov 6 01:58:37.611: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9406-8791/csi-mockplugin Nov 6 01:58:37.615: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-9406 STEP: deleting the driver namespace: csi-mock-volumes-9406-8791 STEP: Waiting for namespaces [csi-mock-volumes-9406-8791] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:43.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:43.873 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should not require VolumeAttach for drivers without attachment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":9,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:24.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:58:28.447: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9671ae86-d373-44a4-bae2-73b2923ab1ef] Namespace:persistent-local-volumes-test-5657 PodName:hostexec-node2-8cwtq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:28.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:28.568: INFO: Creating a PV followed by a PVC Nov 6 01:58:28.575: INFO: Waiting for PV local-pvwhkwr to bind to PVC pvc-69hvg Nov 6 01:58:28.575: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-69hvg] to have phase Bound Nov 6 01:58:28.577: INFO: PersistentVolumeClaim pvc-69hvg found but phase is Pending instead of Bound. Nov 6 01:58:30.581: INFO: PersistentVolumeClaim pvc-69hvg found but phase is Pending instead of Bound. Nov 6 01:58:32.585: INFO: PersistentVolumeClaim pvc-69hvg found but phase is Pending instead of Bound. 
Nov 6 01:58:34.589: INFO: PersistentVolumeClaim pvc-69hvg found but phase is Pending instead of Bound. Nov 6 01:58:36.593: INFO: PersistentVolumeClaim pvc-69hvg found but phase is Pending instead of Bound. Nov 6 01:58:38.597: INFO: PersistentVolumeClaim pvc-69hvg found and phase=Bound (10.022100142s) Nov 6 01:58:38.597: INFO: Waiting up to 3m0s for PersistentVolume local-pvwhkwr to have phase Bound Nov 6 01:58:38.599: INFO: PersistentVolume local-pvwhkwr found and phase=Bound (2.037833ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:58:44.626: INFO: pod "pod-90684226-3d7f-4b6d-9a00-df4a86c066a7" created on Node "node2" STEP: Writing in pod1 Nov 6 01:58:44.626: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5657 PodName:pod-90684226-3d7f-4b6d-9a00-df4a86c066a7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:44.626: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:44.701: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 6 01:58:44.701: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5657 PodName:pod-90684226-3d7f-4b6d-9a00-df4a86c066a7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:44.701: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:44.839: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 6 01:58:44.839: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-9671ae86-d373-44a4-bae2-73b2923ab1ef > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-5657 PodName:pod-90684226-3d7f-4b6d-9a00-df4a86c066a7 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:44.839: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:44.974: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-9671ae86-d373-44a4-bae2-73b2923ab1ef > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-90684226-3d7f-4b6d-9a00-df4a86c066a7 in namespace persistent-local-volumes-test-5657 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:58:44.979: INFO: Deleting PersistentVolumeClaim "pvc-69hvg" Nov 6 01:58:44.983: INFO: Deleting PersistentVolume "local-pvwhkwr" STEP: Removing the test directory Nov 6 01:58:44.988: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9671ae86-d373-44a4-bae2-73b2923ab1ef] Namespace:persistent-local-volumes-test-5657 
PodName:hostexec-node2-8cwtq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:44.988: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:45.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5657" for this suite. • [SLOW TEST:20.715 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:57:05.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 STEP: Building a driver namespace object, basename csi-mock-volumes-3523 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:57:05.185: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-attacher Nov 6 01:57:05.189: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3523 Nov 6 01:57:05.189: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3523 Nov 6 01:57:05.192: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3523 Nov 6 01:57:05.195: INFO: creating *v1.Role: csi-mock-volumes-3523-8887/external-attacher-cfg-csi-mock-volumes-3523 Nov 6 01:57:05.198: INFO: creating *v1.RoleBinding: csi-mock-volumes-3523-8887/csi-attacher-role-cfg Nov 6 01:57:05.201: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-provisioner Nov 6 01:57:05.204: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3523 Nov 6 01:57:05.204: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3523 Nov 6 01:57:05.206: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3523 Nov 6 01:57:05.209: INFO: creating *v1.Role: csi-mock-volumes-3523-8887/external-provisioner-cfg-csi-mock-volumes-3523 Nov 6 01:57:05.212: INFO: creating 
*v1.RoleBinding: csi-mock-volumes-3523-8887/csi-provisioner-role-cfg Nov 6 01:57:05.215: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-resizer Nov 6 01:57:05.217: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3523 Nov 6 01:57:05.217: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3523 Nov 6 01:57:05.219: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3523 Nov 6 01:57:05.223: INFO: creating *v1.Role: csi-mock-volumes-3523-8887/external-resizer-cfg-csi-mock-volumes-3523 Nov 6 01:57:05.225: INFO: creating *v1.RoleBinding: csi-mock-volumes-3523-8887/csi-resizer-role-cfg Nov 6 01:57:05.227: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-snapshotter Nov 6 01:57:05.230: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3523 Nov 6 01:57:05.230: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3523 Nov 6 01:57:05.233: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3523 Nov 6 01:57:05.236: INFO: creating *v1.Role: csi-mock-volumes-3523-8887/external-snapshotter-leaderelection-csi-mock-volumes-3523 Nov 6 01:57:05.239: INFO: creating *v1.RoleBinding: csi-mock-volumes-3523-8887/external-snapshotter-leaderelection Nov 6 01:57:05.242: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-mock Nov 6 01:57:05.246: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3523 Nov 6 01:57:05.248: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3523 Nov 6 01:57:05.250: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3523 Nov 6 01:57:05.254: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3523 Nov 6 01:57:05.256: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3523 Nov 6 01:57:05.258: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3523 Nov 6 01:57:05.261: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3523 Nov 6 01:57:05.264: INFO: creating *v1.StatefulSet: csi-mock-volumes-3523-8887/csi-mockplugin Nov 6 01:57:05.268: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3523 Nov 6 01:57:05.271: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3523" Nov 6 01:57:05.273: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3523 to register on node node1 STEP: Creating pod with fsGroup Nov 6 01:57:15.291: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:57:15.296: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-g65np] to have phase Bound Nov 6 01:57:15.297: INFO: PersistentVolumeClaim pvc-g65np found but phase is Pending instead of Bound. 
Nov 6 01:57:17.299: INFO: PersistentVolumeClaim pvc-g65np found and phase=Bound (2.003813253s) Nov 6 01:57:21.321: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/test/csi-mock-volumes-3523] Namespace:csi-mock-volumes-3523 PodName:pvc-volume-tester-kl5f9 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:21.321: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:21.415: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'filecontents' > '/mnt/test/csi-mock-volumes-3523/csi-mock-volumes-3523'; sync] Namespace:csi-mock-volumes-3523 PodName:pvc-volume-tester-kl5f9 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:21.415: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:23.718: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/test/csi-mock-volumes-3523/csi-mock-volumes-3523] Namespace:csi-mock-volumes-3523 PodName:pvc-volume-tester-kl5f9 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:23.718: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:57:23.803: INFO: pod csi-mock-volumes-3523/pvc-volume-tester-kl5f9 exec for cmd ls -l /mnt/test/csi-mock-volumes-3523/csi-mock-volumes-3523, stdout: -rw-r--r-- 1 root 12558 13 Nov 6 01:57 /mnt/test/csi-mock-volumes-3523/csi-mock-volumes-3523, stderr: Nov 6 01:57:23.803: INFO: ExecWithOptions {Command:[/bin/sh -c rm -fr /mnt/test/csi-mock-volumes-3523] Namespace:csi-mock-volumes-3523 PodName:pvc-volume-tester-kl5f9 ContainerName:volume-tester Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:57:23.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod pvc-volume-tester-kl5f9 Nov 6 01:57:23.883: INFO: Deleting pod "pvc-volume-tester-kl5f9" in namespace "csi-mock-volumes-3523" Nov 6 01:57:23.888: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kl5f9" to be fully deleted STEP: Deleting claim pvc-g65np Nov 6 01:58:09.901: INFO: Waiting up to 2m0s for PersistentVolume pvc-1a9a97b7-a735-4362-8374-afd8a92b2c4d to get deleted Nov 6 01:58:09.903: INFO: PersistentVolume pvc-1a9a97b7-a735-4362-8374-afd8a92b2c4d found and phase=Bound (1.698787ms) Nov 6 01:58:11.907: INFO: PersistentVolume pvc-1a9a97b7-a735-4362-8374-afd8a92b2c4d was removed STEP: Deleting storageclass csi-mock-volumes-3523-sck8jjb STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3523 STEP: Waiting for namespaces [csi-mock-volumes-3523] to vanish STEP: uninstalling csi mock driver Nov 6 01:58:17.922: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-attacher Nov 6 01:58:17.926: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3523 Nov 6 01:58:17.930: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3523 Nov 6 01:58:17.934: INFO: deleting *v1.Role: csi-mock-volumes-3523-8887/external-attacher-cfg-csi-mock-volumes-3523 Nov 6 01:58:17.938: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3523-8887/csi-attacher-role-cfg Nov 6 01:58:17.941: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-provisioner Nov 6 01:58:17.945: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3523 Nov 6 01:58:17.948: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3523 Nov 6 01:58:17.952: INFO: deleting *v1.Role: 
csi-mock-volumes-3523-8887/external-provisioner-cfg-csi-mock-volumes-3523 Nov 6 01:58:17.956: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3523-8887/csi-provisioner-role-cfg Nov 6 01:58:17.959: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-resizer Nov 6 01:58:17.962: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3523 Nov 6 01:58:17.965: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3523 Nov 6 01:58:17.968: INFO: deleting *v1.Role: csi-mock-volumes-3523-8887/external-resizer-cfg-csi-mock-volumes-3523 Nov 6 01:58:17.972: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3523-8887/csi-resizer-role-cfg Nov 6 01:58:17.975: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-snapshotter Nov 6 01:58:17.978: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3523 Nov 6 01:58:17.981: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3523 Nov 6 01:58:17.984: INFO: deleting *v1.Role: csi-mock-volumes-3523-8887/external-snapshotter-leaderelection-csi-mock-volumes-3523 Nov 6 01:58:17.987: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3523-8887/external-snapshotter-leaderelection Nov 6 01:58:17.990: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3523-8887/csi-mock Nov 6 01:58:17.994: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3523 Nov 6 01:58:17.997: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3523 Nov 6 01:58:18.000: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3523 Nov 6 01:58:18.003: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3523 Nov 6 01:58:18.008: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3523 Nov 6 01:58:18.012: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3523 Nov 6 01:58:18.015: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3523 Nov 6 01:58:18.019: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3523-8887/csi-mockplugin Nov 6 01:58:18.022: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3523 STEP: deleting the driver namespace: csi-mock-volumes-3523-8887 STEP: Waiting for namespaces [csi-mock-volumes-3523-8887] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:46.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:100.918 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI FSGroupPolicy [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436 should modify fsGroup if fsGroupPolicy=File /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":4,"skipped":66,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a 
kubernetes client Nov 6 01:58:42.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] nonexistent volume subPath should have the correct mode and owner using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63 STEP: Creating a pod to test emptydir subpath on tmpfs Nov 6 01:58:42.304: INFO: Waiting up to 5m0s for pod "pod-23398edc-fe20-4271-824c-b943442b1ffe" in namespace "emptydir-5699" to be "Succeeded or Failed" Nov 6 01:58:42.307: INFO: Pod "pod-23398edc-fe20-4271-824c-b943442b1ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222003ms Nov 6 01:58:44.311: INFO: Pod "pod-23398edc-fe20-4271-824c-b943442b1ffe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006858239s Nov 6 01:58:46.317: INFO: Pod "pod-23398edc-fe20-4271-824c-b943442b1ffe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012885226s STEP: Saw pod success Nov 6 01:58:46.317: INFO: Pod "pod-23398edc-fe20-4271-824c-b943442b1ffe" satisfied condition "Succeeded or Failed" Nov 6 01:58:46.320: INFO: Trying to get logs from node node1 pod pod-23398edc-fe20-4271-824c-b943442b1ffe container test-container: STEP: delete the pod Nov 6 01:58:46.331: INFO: Waiting for pod pod-23398edc-fe20-4271-824c-b943442b1ffe to disappear Nov 6 01:58:46.333: INFO: Pod pod-23398edc-fe20-4271-824c-b943442b1ffe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:46.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5699" for this suite. 
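A rough, hand-written equivalent of the pod this EmptyDir test creates (name, image, and the user/group IDs are illustrative assumptions, not values taken from the suite): it mounts a tmpfs-backed emptyDir through a not-yet-existing subPath while setting an fsGroup, so the kubelet has to create the directory with the expected mode and ownership.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-subpath-demo   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                     # non-root, matching the test's intent
    fsGroup: 123                        # group the kubelet applies to the volume
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ldn /test-volume && id"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
      subPath: new-subdir               # nonexistent subPath, created at mount time
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                    # tmpfs-backed emptyDir
EOF

The test pod exits successfully only if the resulting mode and ownership are correct, which is why the log above simply waits for the pod to reach "Succeeded or Failed" and then records "Saw pod success".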
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":10,"skipped":487,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:35.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:58:41.741: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-91a25510-cf56-469e-98ab-3422eb9f259b && mount --bind /tmp/local-volume-test-91a25510-cf56-469e-98ab-3422eb9f259b /tmp/local-volume-test-91a25510-cf56-469e-98ab-3422eb9f259b] Namespace:persistent-local-volumes-test-3101 PodName:hostexec-node2-xm52s ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:41.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:42.114: INFO: Creating a PV followed by a PVC Nov 6 01:58:42.120: INFO: Waiting for PV local-pv7m9s4 to bind to PVC pvc-9sxvz Nov 6 01:58:42.120: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9sxvz] to have phase Bound Nov 6 01:58:42.122: INFO: PersistentVolumeClaim pvc-9sxvz found but phase is Pending instead of Bound. Nov 6 01:58:44.125: INFO: PersistentVolumeClaim pvc-9sxvz found but phase is Pending instead of Bound. Nov 6 01:58:46.128: INFO: PersistentVolumeClaim pvc-9sxvz found but phase is Pending instead of Bound. Nov 6 01:58:48.131: INFO: PersistentVolumeClaim pvc-9sxvz found but phase is Pending instead of Bound. Nov 6 01:58:50.134: INFO: PersistentVolumeClaim pvc-9sxvz found but phase is Pending instead of Bound. 
Nov 6 01:58:52.139: INFO: PersistentVolumeClaim pvc-9sxvz found and phase=Bound (10.018932321s) Nov 6 01:58:52.139: INFO: Waiting up to 3m0s for PersistentVolume local-pv7m9s4 to have phase Bound Nov 6 01:58:52.141: INFO: PersistentVolume local-pv7m9s4 found and phase=Bound (2.09773ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:58:56.179: INFO: pod "pod-aa744f3e-4d02-44ed-a944-918ae9859eb0" created on Node "node2" STEP: Writing in pod1 Nov 6 01:58:56.179: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3101 PodName:pod-aa744f3e-4d02-44ed-a944-918ae9859eb0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:56.179: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:56.263: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 6 01:58:56.263: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3101 PodName:pod-aa744f3e-4d02-44ed-a944-918ae9859eb0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:56.263: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:56.344: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 6 01:58:56.344: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-91a25510-cf56-469e-98ab-3422eb9f259b > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3101 PodName:pod-aa744f3e-4d02-44ed-a944-918ae9859eb0 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:56.344: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:56.444: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-91a25510-cf56-469e-98ab-3422eb9f259b > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-aa744f3e-4d02-44ed-a944-918ae9859eb0 in namespace persistent-local-volumes-test-3101 [AfterEach] [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:58:56.449: INFO: Deleting PersistentVolumeClaim "pvc-9sxvz" Nov 6 01:58:56.453: INFO: Deleting PersistentVolume "local-pv7m9s4" STEP: Removing the test directory Nov 6 01:58:56.458: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-91a25510-cf56-469e-98ab-3422eb9f259b && rm -r /tmp/local-volume-test-91a25510-cf56-469e-98ab-3422eb9f259b] Namespace:persistent-local-volumes-test-3101 PodName:hostexec-node2-xm52s ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true 
Quiet:false} Nov 6 01:58:56.458: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:56.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3101" for this suite. • [SLOW TEST:20.874 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":13,"skipped":390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:35.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-27a27109-b803-4015-8dc2-7c553f80ba0a" Nov 6 01:58:37.726: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-27a27109-b803-4015-8dc2-7c553f80ba0a && dd if=/dev/zero of=/tmp/local-volume-test-27a27109-b803-4015-8dc2-7c553f80ba0a/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-27a27109-b803-4015-8dc2-7c553f80ba0a/file] Namespace:persistent-local-volumes-test-6608 PodName:hostexec-node1-7tf8h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:37.726: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:38.680: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-27a27109-b803-4015-8dc2-7c553f80ba0a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6608 PodName:hostexec-node1-7tf8h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:38.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:38.915: INFO: Creating a PV followed by a PVC Nov 6 01:58:38.922: INFO: 
Waiting for PV local-pvz9wxl to bind to PVC pvc-mzhzq Nov 6 01:58:38.922: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-mzhzq] to have phase Bound Nov 6 01:58:38.924: INFO: PersistentVolumeClaim pvc-mzhzq found but phase is Pending instead of Bound. Nov 6 01:58:40.928: INFO: PersistentVolumeClaim pvc-mzhzq found but phase is Pending instead of Bound. Nov 6 01:58:42.934: INFO: PersistentVolumeClaim pvc-mzhzq found but phase is Pending instead of Bound. Nov 6 01:58:44.938: INFO: PersistentVolumeClaim pvc-mzhzq found but phase is Pending instead of Bound. Nov 6 01:58:46.942: INFO: PersistentVolumeClaim pvc-mzhzq found but phase is Pending instead of Bound. Nov 6 01:58:48.945: INFO: PersistentVolumeClaim pvc-mzhzq found but phase is Pending instead of Bound. Nov 6 01:58:50.948: INFO: PersistentVolumeClaim pvc-mzhzq found but phase is Pending instead of Bound. Nov 6 01:58:52.954: INFO: PersistentVolumeClaim pvc-mzhzq found and phase=Bound (14.031626226s) Nov 6 01:58:52.954: INFO: Waiting up to 3m0s for PersistentVolume local-pvz9wxl to have phase Bound Nov 6 01:58:52.956: INFO: PersistentVolume local-pvz9wxl found and phase=Bound (2.624911ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:58:56.984: INFO: pod "pod-d1f92d78-c565-4a04-8b65-fa79c3802288" created on Node "node1" STEP: Writing in pod1 Nov 6 01:58:56.985: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6608 PodName:pod-d1f92d78-c565-4a04-8b65-fa79c3802288 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:56.985: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:57.173: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 6 01:58:57.173: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6608 PodName:pod-d1f92d78-c565-4a04-8b65-fa79c3802288 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:57.173: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:57.259: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-d1f92d78-c565-4a04-8b65-fa79c3802288 in namespace persistent-local-volumes-test-6608 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:58:57.265: INFO: Deleting PersistentVolumeClaim "pvc-mzhzq" Nov 6 01:58:57.270: INFO: Deleting PersistentVolume "local-pvz9wxl" Nov 6 01:58:57.276: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep 
/tmp/local-volume-test-27a27109-b803-4015-8dc2-7c553f80ba0a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6608 PodName:hostexec-node1-7tf8h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:57.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-27a27109-b803-4015-8dc2-7c553f80ba0a/file Nov 6 01:58:57.374: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6608 PodName:hostexec-node1-7tf8h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:57.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-27a27109-b803-4015-8dc2-7c553f80ba0a Nov 6 01:58:57.464: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-27a27109-b803-4015-8dc2-7c553f80ba0a] Namespace:persistent-local-volumes-test-6608 PodName:hostexec-node1-7tf8h ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:57.464: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:58:57.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6608" for this suite. • [SLOW TEST:21.958 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":15,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:46.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test 
volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746" Nov 6 01:58:50.119: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746 && dd if=/dev/zero of=/tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746/file] Namespace:persistent-local-volumes-test-2207 PodName:hostexec-node1-phcgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:50.119: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:50.242: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2207 PodName:hostexec-node1-phcgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:50.243: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:50.384: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop1 && mount -t ext4 /dev/loop1 /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746 && chmod o+rwx /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746] Namespace:persistent-local-volumes-test-2207 PodName:hostexec-node1-phcgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:50.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:50.596: INFO: Creating a PV followed by a PVC Nov 6 01:58:50.603: INFO: Waiting for PV local-pvnfz2c to bind to PVC pvc-8z6rc Nov 6 01:58:50.603: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8z6rc] to have phase Bound Nov 6 01:58:50.605: INFO: PersistentVolumeClaim pvc-8z6rc found but phase is Pending instead of Bound. 
Nov 6 01:58:52.609: INFO: PersistentVolumeClaim pvc-8z6rc found and phase=Bound (2.006297135s) Nov 6 01:58:52.609: INFO: Waiting up to 3m0s for PersistentVolume local-pvnfz2c to have phase Bound Nov 6 01:58:52.612: INFO: PersistentVolume local-pvnfz2c found and phase=Bound (2.933955ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:58:56.639: INFO: pod "pod-f74ae3ca-7a4d-4a0c-b122-506cfc0ac000" created on Node "node1" STEP: Writing in pod1 Nov 6 01:58:56.639: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2207 PodName:pod-f74ae3ca-7a4d-4a0c-b122-506cfc0ac000 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:56.639: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:57.182: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:58:57.182: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2207 PodName:pod-f74ae3ca-7a4d-4a0c-b122-506cfc0ac000 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:57.182: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:57.274: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-f74ae3ca-7a4d-4a0c-b122-506cfc0ac000 in namespace persistent-local-volumes-test-2207 STEP: Creating pod2 STEP: Creating a pod Nov 6 01:59:03.303: INFO: pod "pod-9d726a95-cbb8-4c84-a7d0-fe28a930857a" created on Node "node1" STEP: Reading in pod2 Nov 6 01:59:03.303: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-2207 PodName:pod-9d726a95-cbb8-4c84-a7d0-fe28a930857a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:03.303: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:03.386: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-9d726a95-cbb8-4c84-a7d0-fe28a930857a in namespace persistent-local-volumes-test-2207 [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:03.391: INFO: Deleting PersistentVolumeClaim "pvc-8z6rc" Nov 6 01:59:03.395: INFO: Deleting PersistentVolume "local-pvnfz2c" Nov 6 01:59:03.399: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746] Namespace:persistent-local-volumes-test-2207 PodName:hostexec-node1-phcgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:03.399: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:03.529: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] 
Namespace:persistent-local-volumes-test-2207 PodName:hostexec-node1-phcgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:03.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node1" at path /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746/file Nov 6 01:59:03.612: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-2207 PodName:hostexec-node1-phcgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:03.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746 Nov 6 01:59:03.937: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ac20f2e9-8846-4f69-9191-ede4cca5e746] Namespace:persistent-local-volumes-test-2207 PodName:hostexec-node1-phcgf ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:03.937: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:04.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2207" for this suite. • [SLOW TEST:17.978 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":78,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:45.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:58:49.529: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e60b22b1-33db-4a66-8b5d-99efded46f18] 
Namespace:persistent-local-volumes-test-3965 PodName:hostexec-node2-ssjp8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:58:49.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:58:49.618: INFO: Creating a PV followed by a PVC Nov 6 01:58:49.624: INFO: Waiting for PV local-pvphlmw to bind to PVC pvc-7t226 Nov 6 01:58:49.624: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-7t226] to have phase Bound Nov 6 01:58:49.627: INFO: PersistentVolumeClaim pvc-7t226 found but phase is Pending instead of Bound. Nov 6 01:58:51.630: INFO: PersistentVolumeClaim pvc-7t226 found but phase is Pending instead of Bound. Nov 6 01:58:53.632: INFO: PersistentVolumeClaim pvc-7t226 found and phase=Bound (4.007966292s) Nov 6 01:58:53.632: INFO: Waiting up to 3m0s for PersistentVolume local-pvphlmw to have phase Bound Nov 6 01:58:53.634: INFO: PersistentVolume local-pvphlmw found and phase=Bound (1.673086ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:58:57.658: INFO: pod "pod-ecb7adc4-6892-4b6e-ac6d-7c6351684287" created on Node "node2" STEP: Writing in pod1 Nov 6 01:58:57.658: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3965 PodName:pod-ecb7adc4-6892-4b6e-ac6d-7c6351684287 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:57.658: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:57.756: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:58:57.756: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3965 PodName:pod-ecb7adc4-6892-4b6e-ac6d-7c6351684287 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:58:57.756: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:58:57.857: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-ecb7adc4-6892-4b6e-ac6d-7c6351684287 in namespace persistent-local-volumes-test-3965 STEP: Creating pod2 STEP: Creating a pod Nov 6 01:59:03.882: INFO: pod "pod-a21b1d26-3b95-42d2-ae7f-6a5f51bd7e59" created on Node "node2" STEP: Reading in pod2 Nov 6 01:59:03.882: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3965 PodName:pod-a21b1d26-3b95-42d2-ae7f-6a5f51bd7e59 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:03.882: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:03.957: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-a21b1d26-3b95-42d2-ae7f-6a5f51bd7e59 in namespace persistent-local-volumes-test-3965 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:03.962: INFO: Deleting PersistentVolumeClaim "pvc-7t226" Nov 6 01:59:03.967: INFO: 
Deleting PersistentVolume "local-pvphlmw" STEP: Removing the test directory Nov 6 01:59:03.972: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e60b22b1-33db-4a66-8b5d-99efded46f18] Namespace:persistent-local-volumes-test-3965 PodName:hostexec-node2-ssjp8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:03.972: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:04.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3965" for this suite. • [SLOW TEST:18.614 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:04.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59 STEP: Creating configMap with name configmap-test-volume-6328419f-18d8-4487-9cf8-181a91e70575 STEP: Creating a pod to test consume configMaps Nov 6 01:59:04.091: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1669901-4e7a-4a6d-a2cb-a2f6e5aa2e3e" in namespace "configmap-8196" to be "Succeeded or Failed" Nov 6 01:59:04.100: INFO: Pod "pod-configmaps-f1669901-4e7a-4a6d-a2cb-a2f6e5aa2e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.050453ms Nov 6 01:59:06.104: INFO: Pod "pod-configmaps-f1669901-4e7a-4a6d-a2cb-a2f6e5aa2e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013138175s Nov 6 01:59:08.108: INFO: Pod "pod-configmaps-f1669901-4e7a-4a6d-a2cb-a2f6e5aa2e3e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017060256s STEP: Saw pod success Nov 6 01:59:08.108: INFO: Pod "pod-configmaps-f1669901-4e7a-4a6d-a2cb-a2f6e5aa2e3e" satisfied condition "Succeeded or Failed" Nov 6 01:59:08.110: INFO: Trying to get logs from node node1 pod pod-configmaps-f1669901-4e7a-4a6d-a2cb-a2f6e5aa2e3e container agnhost-container: STEP: delete the pod Nov 6 01:59:08.127: INFO: Waiting for pod pod-configmaps-f1669901-4e7a-4a6d-a2cb-a2f6e5aa2e3e to disappear Nov 6 01:59:08.129: INFO: Pod pod-configmaps-f1669901-4e7a-4a6d-a2cb-a2f6e5aa2e3e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:08.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8196" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":80,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:56.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-f604c62b-ce2d-4d82-9a1f-e8f28df6127f" Nov 6 01:59:00.743: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f604c62b-ce2d-4d82-9a1f-e8f28df6127f && dd if=/dev/zero of=/tmp/local-volume-test-f604c62b-ce2d-4d82-9a1f-e8f28df6127f/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-f604c62b-ce2d-4d82-9a1f-e8f28df6127f/file] Namespace:persistent-local-volumes-test-9398 PodName:hostexec-node1-x49vr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:00.743: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:00.877: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f604c62b-ce2d-4d82-9a1f-e8f28df6127f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9398 PodName:hostexec-node1-x49vr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:00.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:00.968: INFO: Creating a PV followed by a PVC Nov 6 01:59:00.974: INFO: Waiting for PV local-pvpmxjb to bind to PVC pvc-zgsgv Nov 6 01:59:00.974: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-zgsgv] to have phase Bound Nov 6 01:59:00.977: INFO: PersistentVolumeClaim pvc-zgsgv found but phase is Pending instead of Bound. 
Nov 6 01:59:02.980: INFO: PersistentVolumeClaim pvc-zgsgv found but phase is Pending instead of Bound. Nov 6 01:59:04.986: INFO: PersistentVolumeClaim pvc-zgsgv found but phase is Pending instead of Bound. Nov 6 01:59:06.990: INFO: PersistentVolumeClaim pvc-zgsgv found and phase=Bound (6.015365882s) Nov 6 01:59:06.990: INFO: Waiting up to 3m0s for PersistentVolume local-pvpmxjb to have phase Bound Nov 6 01:59:06.992: INFO: PersistentVolume local-pvpmxjb found and phase=Bound (1.892951ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:59:11.020: INFO: pod "pod-af625ed1-6103-4e28-9f93-f7fd2d44e87a" created on Node "node1" STEP: Writing in pod1 Nov 6 01:59:11.020: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file] Namespace:persistent-local-volumes-test-9398 PodName:pod-af625ed1-6103-4e28-9f93-f7fd2d44e87a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:11.020: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:11.357: INFO: podRWCmdExec cmd: "mkdir -p /tmp/mnt/volume1; echo test-file-content > /tmp/mnt/volume1/test-file && SUDO_CMD=$(which sudo); echo ${SUDO_CMD} && ${SUDO_CMD} dd if=/tmp/mnt/volume1/test-file of=/mnt/volume1 bs=512 count=100 && rm /tmp/mnt/volume1/test-file", out: "", stderr: "0+1 records in\n0+1 records out\n18 bytes (18B) copied, 0.000121 seconds, 145.3KB/s", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 6 01:59:11.357: INFO: ExecWithOptions {Command:[/bin/sh -c hexdump -n 100 -e '100 "%_p"' /mnt/volume1 | head -1] Namespace:persistent-local-volumes-test-9398 PodName:pod-af625ed1-6103-4e28-9f93-f7fd2d44e87a ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:11.357: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:11.490: INFO: podRWCmdExec cmd: "hexdump -n 100 -e '100 \"%_p\"' /mnt/volume1 | head -1", out: "test-file-content...................................................................................", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-af625ed1-6103-4e28-9f93-f7fd2d44e87a in namespace persistent-local-volumes-test-9398 [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:11.494: INFO: Deleting PersistentVolumeClaim "pvc-zgsgv" Nov 6 01:59:11.498: INFO: Deleting PersistentVolume "local-pvpmxjb" Nov 6 01:59:11.503: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f604c62b-ce2d-4d82-9a1f-e8f28df6127f/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-9398 PodName:hostexec-node1-x49vr 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:11.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-f604c62b-ce2d-4d82-9a1f-e8f28df6127f/file Nov 6 01:59:11.592: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-9398 PodName:hostexec-node1-x49vr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:11.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-f604c62b-ce2d-4d82-9a1f-e8f28df6127f Nov 6 01:59:11.675: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f604c62b-ce2d-4d82-9a1f-e8f28df6127f] Namespace:persistent-local-volumes-test-9398 PodName:hostexec-node1-x49vr ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:11.675: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:11.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9398" for this suite. • [SLOW TEST:15.097 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":14,"skipped":441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:11.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mounted-volume-expand STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:61 Nov 6 01:59:11.914: INFO: Only supported for providers [aws gce] (not local) [AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:11.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mounted-volume-expand-6295" for this suite. 
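For reference, the host-side commands that the block-device-backed local volume tests above run (through nsenter into the node's mount namespace, via the hostexec pod) boil down to the following sequence; the directory path and the loop-device variable are condensed placeholders rather than the exact values from the log.

DIR=/tmp/local-volume-test-example                   # placeholder path
mkdir -p "$DIR"
dd if=/dev/zero of="$DIR/file" bs=4096 count=5120    # 20 MiB backing file
losetup -f "$DIR/file"                               # attach the first free loop device
E2E_LOOP_DEV=$(losetup | grep "$DIR/file" | awk '{ print $1 }')
# The "blockfswithformat" variant additionally formats and mounts the device:
mkfs -t ext4 "$E2E_LOOP_DEV"
mount -t ext4 "$E2E_LOOP_DEV" "$DIR"
chmod o+rwx "$DIR"
# Teardown, mirroring the cleanup steps in the log:
umount "$DIR"
losetup -d "$E2E_LOOP_DEV"
rm -r "$DIR"

The "[Volume type: block]" and "blockfswithoutformat" runs above stop after losetup and point the local PersistentVolume at the loop device without formatting or mounting it.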
[AfterEach] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:108 Nov 6 01:59:11.924: INFO: AfterEach: Cleaning up resources for mounted volume resize S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] Mounted volume expand /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should verify mounted devices can be resized [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122 Only supported for providers [aws gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:62 ------------------------------ SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:55:55.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-106 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:55:55.213: INFO: creating *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-attacher Nov 6 01:55:55.215: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-106 Nov 6 01:55:55.215: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-106 Nov 6 01:55:55.218: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-106 Nov 6 01:55:55.222: INFO: creating *v1.Role: csi-mock-volumes-106-2846/external-attacher-cfg-csi-mock-volumes-106 Nov 6 01:55:55.224: INFO: creating *v1.RoleBinding: csi-mock-volumes-106-2846/csi-attacher-role-cfg Nov 6 01:55:55.227: INFO: creating *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-provisioner Nov 6 01:55:55.230: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-106 Nov 6 01:55:55.230: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-106 Nov 6 01:55:55.232: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-106 Nov 6 01:55:55.235: INFO: creating *v1.Role: csi-mock-volumes-106-2846/external-provisioner-cfg-csi-mock-volumes-106 Nov 6 01:55:55.237: INFO: creating *v1.RoleBinding: csi-mock-volumes-106-2846/csi-provisioner-role-cfg Nov 6 01:55:55.240: INFO: creating *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-resizer Nov 6 01:55:55.242: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-106 Nov 6 01:55:55.242: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-106 Nov 6 01:55:55.244: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-106 Nov 6 01:55:55.246: INFO: creating *v1.Role: csi-mock-volumes-106-2846/external-resizer-cfg-csi-mock-volumes-106 Nov 6 01:55:55.248: INFO: creating *v1.RoleBinding: csi-mock-volumes-106-2846/csi-resizer-role-cfg Nov 6 01:55:55.251: INFO: creating *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-snapshotter Nov 6 
01:55:55.253: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-106 Nov 6 01:55:55.253: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-106 Nov 6 01:55:55.256: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-106 Nov 6 01:55:55.259: INFO: creating *v1.Role: csi-mock-volumes-106-2846/external-snapshotter-leaderelection-csi-mock-volumes-106 Nov 6 01:55:55.262: INFO: creating *v1.RoleBinding: csi-mock-volumes-106-2846/external-snapshotter-leaderelection Nov 6 01:55:55.264: INFO: creating *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-mock Nov 6 01:55:55.267: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-106 Nov 6 01:55:55.270: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-106 Nov 6 01:55:55.272: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-106 Nov 6 01:55:55.275: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-106 Nov 6 01:55:55.277: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-106 Nov 6 01:55:55.279: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-106 Nov 6 01:55:55.282: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-106 Nov 6 01:55:55.285: INFO: creating *v1.StatefulSet: csi-mock-volumes-106-2846/csi-mockplugin Nov 6 01:55:55.291: INFO: creating *v1.StatefulSet: csi-mock-volumes-106-2846/csi-mockplugin-attacher Nov 6 01:55:55.294: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-106 to register on node node2 STEP: Creating pod Nov 6 01:56:11.563: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:56:11.568: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-b6lt2] to have phase Bound Nov 6 01:56:11.571: INFO: PersistentVolumeClaim pvc-b6lt2 found but phase is Pending instead of Bound. 
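While pvc-b6lt2 above is still reported Pending, the mock-driver registration that the spec waited for ("waiting for CSIDriver csi-mock-csi-mock-volumes-106 to register on node node2") can be confirmed with plain kubectl against the objects named in the log. These are standard kubectl invocations, not part of the e2e framework:

  # the CSIDriver object created by the mock deployment
  kubectl get csidriver csi-mock-csi-mock-volumes-106
  # the per-node registration recorded by kubelet on node2
  kubectl get csinode node2 -o jsonpath='{.spec.drivers[*].name}'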
Nov 6 01:56:13.574: INFO: PersistentVolumeClaim pvc-b6lt2 found and phase=Bound (2.005913403s) STEP: Expanding current pvc STEP: Deleting pod pvc-volume-tester-qccr7 Nov 6 01:58:49.613: INFO: Deleting pod "pvc-volume-tester-qccr7" in namespace "csi-mock-volumes-106" Nov 6 01:58:49.618: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qccr7" to be fully deleted STEP: Deleting claim pvc-b6lt2 Nov 6 01:58:59.629: INFO: Waiting up to 2m0s for PersistentVolume pvc-2587f209-d0fa-49ce-9267-ee1ce4b2fa3e to get deleted Nov 6 01:58:59.631: INFO: PersistentVolume pvc-2587f209-d0fa-49ce-9267-ee1ce4b2fa3e found and phase=Bound (1.941202ms) Nov 6 01:59:01.634: INFO: PersistentVolume pvc-2587f209-d0fa-49ce-9267-ee1ce4b2fa3e was removed STEP: Deleting storageclass csi-mock-volumes-106-scx7q9j STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-106 STEP: Waiting for namespaces [csi-mock-volumes-106] to vanish STEP: uninstalling csi mock driver Nov 6 01:59:07.645: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-attacher Nov 6 01:59:07.650: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-106 Nov 6 01:59:07.655: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-106 Nov 6 01:59:07.658: INFO: deleting *v1.Role: csi-mock-volumes-106-2846/external-attacher-cfg-csi-mock-volumes-106 Nov 6 01:59:07.662: INFO: deleting *v1.RoleBinding: csi-mock-volumes-106-2846/csi-attacher-role-cfg Nov 6 01:59:07.667: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-provisioner Nov 6 01:59:07.671: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-106 Nov 6 01:59:07.674: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-106 Nov 6 01:59:07.677: INFO: deleting *v1.Role: csi-mock-volumes-106-2846/external-provisioner-cfg-csi-mock-volumes-106 Nov 6 01:59:07.680: INFO: deleting *v1.RoleBinding: csi-mock-volumes-106-2846/csi-provisioner-role-cfg Nov 6 01:59:07.684: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-resizer Nov 6 01:59:07.687: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-106 Nov 6 01:59:07.690: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-106 Nov 6 01:59:07.694: INFO: deleting *v1.Role: csi-mock-volumes-106-2846/external-resizer-cfg-csi-mock-volumes-106 Nov 6 01:59:07.697: INFO: deleting *v1.RoleBinding: csi-mock-volumes-106-2846/csi-resizer-role-cfg Nov 6 01:59:07.700: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-snapshotter Nov 6 01:59:07.703: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-106 Nov 6 01:59:07.707: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-106 Nov 6 01:59:07.710: INFO: deleting *v1.Role: csi-mock-volumes-106-2846/external-snapshotter-leaderelection-csi-mock-volumes-106 Nov 6 01:59:07.714: INFO: deleting *v1.RoleBinding: csi-mock-volumes-106-2846/external-snapshotter-leaderelection Nov 6 01:59:07.718: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-106-2846/csi-mock Nov 6 01:59:07.722: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-106 Nov 6 01:59:07.725: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-106 Nov 6 01:59:07.729: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-106 Nov 6 01:59:07.731: INFO: deleting *v1.ClusterRoleBinding: 
psp-csi-controller-driver-registrar-role-csi-mock-volumes-106 Nov 6 01:59:07.735: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-106 Nov 6 01:59:07.739: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-106 Nov 6 01:59:07.742: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-106 Nov 6 01:59:07.745: INFO: deleting *v1.StatefulSet: csi-mock-volumes-106-2846/csi-mockplugin Nov 6 01:59:07.750: INFO: deleting *v1.StatefulSet: csi-mock-volumes-106-2846/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-106-2846 STEP: Waiting for namespaces [csi-mock-volumes-106-2846] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:19.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:204.622 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should not expand volume if resizingOnDriver=off, resizingOnSC=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:04.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:59:08.215: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7398ddd0-fe54-4b0e-b4b7-137affe9b788] Namespace:persistent-local-volumes-test-4843 PodName:hostexec-node2-qsgg8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:08.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:08.618: INFO: Creating a PV followed by a PVC Nov 6 01:59:08.626: INFO: Waiting for PV local-pvbg77n to bind to PVC pvc-9fjgk Nov 6 01:59:08.626: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-9fjgk] to have phase Bound Nov 6 01:59:08.628: INFO: PersistentVolumeClaim pvc-9fjgk found but phase is Pending instead of Bound. 
Nov 6 01:59:10.634: INFO: PersistentVolumeClaim pvc-9fjgk found and phase=Bound (2.008152863s) Nov 6 01:59:10.634: INFO: Waiting up to 3m0s for PersistentVolume local-pvbg77n to have phase Bound Nov 6 01:59:10.637: INFO: PersistentVolume local-pvbg77n found and phase=Bound (2.558917ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:59:20.667: INFO: pod "pod-259bec68-068c-4fd6-bb22-3c5d46277458" created on Node "node2" STEP: Writing in pod1 Nov 6 01:59:20.667: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4843 PodName:pod-259bec68-068c-4fd6-bb22-3c5d46277458 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:20.667: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:20.768: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 6 01:59:20.768: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4843 PodName:pod-259bec68-068c-4fd6-bb22-3c5d46277458 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:20.768: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:20.880: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-259bec68-068c-4fd6-bb22-3c5d46277458 in namespace persistent-local-volumes-test-4843 [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:20.885: INFO: Deleting PersistentVolumeClaim "pvc-9fjgk" Nov 6 01:59:20.889: INFO: Deleting PersistentVolume "local-pvbg77n" STEP: Removing the test directory Nov 6 01:59:20.893: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7398ddd0-fe54-4b0e-b4b7-137affe9b788] Namespace:persistent-local-volumes-test-4843 PodName:hostexec-node2-qsgg8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:20.893: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:20.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4843" for this suite. 
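The dir-type local volume in the spec above is created and removed by exec'ing into a privileged hostexec pod that has the node's root mounted at /rootfs and entering the host's mount namespace. Stripped of the framework plumbing, the node-side lifecycle is roughly the following (directory name illustrative):

  # setup: create the backing directory on the node itself
  nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 'mkdir -p /tmp/local-volume-test-<uuid>'
  # the test pod then writes and reads through the local PV mount:
  #   mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file
  #   cat /mnt/volume1/test-file
  # teardown: remove the directory after PV and PVC deletion
  nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c 'rm -r /tmp/local-volume-test-<uuid>'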
• [SLOW TEST:16.830 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:19.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 STEP: Building a driver namespace object, basename csi-mock-volumes-4232 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:58:19.956: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-attacher Nov 6 01:58:19.960: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4232 Nov 6 01:58:19.960: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4232 Nov 6 01:58:19.963: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4232 Nov 6 01:58:19.967: INFO: creating *v1.Role: csi-mock-volumes-4232-6526/external-attacher-cfg-csi-mock-volumes-4232 Nov 6 01:58:19.970: INFO: creating *v1.RoleBinding: csi-mock-volumes-4232-6526/csi-attacher-role-cfg Nov 6 01:58:19.972: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-provisioner Nov 6 01:58:19.975: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4232 Nov 6 01:58:19.975: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4232 Nov 6 01:58:19.978: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4232 Nov 6 01:58:19.980: INFO: creating *v1.Role: csi-mock-volumes-4232-6526/external-provisioner-cfg-csi-mock-volumes-4232 Nov 6 01:58:19.983: INFO: creating *v1.RoleBinding: csi-mock-volumes-4232-6526/csi-provisioner-role-cfg Nov 6 01:58:19.986: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-resizer Nov 6 01:58:19.988: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4232 Nov 6 01:58:19.988: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4232 Nov 6 01:58:19.991: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4232 Nov 6 01:58:19.995: INFO: creating *v1.Role: csi-mock-volumes-4232-6526/external-resizer-cfg-csi-mock-volumes-4232 Nov 6 01:58:19.997: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-4232-6526/csi-resizer-role-cfg Nov 6 01:58:20.000: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-snapshotter Nov 6 01:58:20.003: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4232 Nov 6 01:58:20.003: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4232 Nov 6 01:58:20.020: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4232 Nov 6 01:58:20.023: INFO: creating *v1.Role: csi-mock-volumes-4232-6526/external-snapshotter-leaderelection-csi-mock-volumes-4232 Nov 6 01:58:20.025: INFO: creating *v1.RoleBinding: csi-mock-volumes-4232-6526/external-snapshotter-leaderelection Nov 6 01:58:20.028: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-mock Nov 6 01:58:20.030: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4232 Nov 6 01:58:20.033: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4232 Nov 6 01:58:20.035: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4232 Nov 6 01:58:20.039: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4232 Nov 6 01:58:20.041: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4232 Nov 6 01:58:20.043: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4232 Nov 6 01:58:20.046: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4232 Nov 6 01:58:20.048: INFO: creating *v1.StatefulSet: csi-mock-volumes-4232-6526/csi-mockplugin Nov 6 01:58:20.052: INFO: creating *v1.StatefulSet: csi-mock-volumes-4232-6526/csi-mockplugin-attacher Nov 6 01:58:20.056: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4232 to register on node node2 STEP: Creating pod Nov 6 01:58:29.571: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:58:29.575: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mq6mz] to have phase Bound Nov 6 01:58:29.577: INFO: PersistentVolumeClaim pvc-mq6mz found but phase is Pending instead of Bound. 
Nov 6 01:58:31.580: INFO: PersistentVolumeClaim pvc-mq6mz found and phase=Bound (2.004476524s) STEP: Checking if VolumeAttachment was created for the pod STEP: Deleting pod pvc-volume-tester-26xg4 Nov 6 01:58:45.613: INFO: Deleting pod "pvc-volume-tester-26xg4" in namespace "csi-mock-volumes-4232" Nov 6 01:58:45.617: INFO: Wait up to 5m0s for pod "pvc-volume-tester-26xg4" to be fully deleted STEP: Deleting claim pvc-mq6mz Nov 6 01:59:01.631: INFO: Waiting up to 2m0s for PersistentVolume pvc-ed81ad84-c0a1-42d5-b3c9-e2eb3c3d18e9 to get deleted Nov 6 01:59:01.633: INFO: PersistentVolume pvc-ed81ad84-c0a1-42d5-b3c9-e2eb3c3d18e9 found and phase=Bound (2.129147ms) Nov 6 01:59:03.636: INFO: PersistentVolume pvc-ed81ad84-c0a1-42d5-b3c9-e2eb3c3d18e9 was removed STEP: Deleting storageclass csi-mock-volumes-4232-sch42mt STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4232 STEP: Waiting for namespaces [csi-mock-volumes-4232] to vanish STEP: uninstalling csi mock driver Nov 6 01:59:09.648: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-attacher Nov 6 01:59:09.652: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4232 Nov 6 01:59:09.655: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4232 Nov 6 01:59:09.662: INFO: deleting *v1.Role: csi-mock-volumes-4232-6526/external-attacher-cfg-csi-mock-volumes-4232 Nov 6 01:59:09.677: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4232-6526/csi-attacher-role-cfg Nov 6 01:59:09.686: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-provisioner Nov 6 01:59:09.693: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4232 Nov 6 01:59:09.697: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4232 Nov 6 01:59:09.706: INFO: deleting *v1.Role: csi-mock-volumes-4232-6526/external-provisioner-cfg-csi-mock-volumes-4232 Nov 6 01:59:09.709: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4232-6526/csi-provisioner-role-cfg Nov 6 01:59:09.712: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-resizer Nov 6 01:59:09.715: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4232 Nov 6 01:59:09.719: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4232 Nov 6 01:59:09.722: INFO: deleting *v1.Role: csi-mock-volumes-4232-6526/external-resizer-cfg-csi-mock-volumes-4232 Nov 6 01:59:09.726: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4232-6526/csi-resizer-role-cfg Nov 6 01:59:09.729: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-snapshotter Nov 6 01:59:09.733: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4232 Nov 6 01:59:09.736: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4232 Nov 6 01:59:09.739: INFO: deleting *v1.Role: csi-mock-volumes-4232-6526/external-snapshotter-leaderelection-csi-mock-volumes-4232 Nov 6 01:59:09.743: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4232-6526/external-snapshotter-leaderelection Nov 6 01:59:09.746: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4232-6526/csi-mock Nov 6 01:59:09.750: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4232 Nov 6 01:59:09.754: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4232 Nov 6 01:59:09.757: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4232 Nov 6 01:59:09.761: 
INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4232 Nov 6 01:59:09.764: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4232 Nov 6 01:59:09.767: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4232 Nov 6 01:59:09.770: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4232 Nov 6 01:59:09.773: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4232-6526/csi-mockplugin Nov 6 01:59:09.777: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4232-6526/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4232-6526 STEP: Waiting for namespaces [csi-mock-volumes-4232-6526] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:21.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:61.896 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI attach test using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316 should preserve attachment policy when no CSIDriver present /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":11,"skipped":225,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:11.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 01:59:19.990: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-720d9f28-0246-4a7c-80ec-8eb912504991] Namespace:persistent-local-volumes-test-6486 PodName:hostexec-node2-tt6kt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:19.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:20.081: INFO: Creating a PV followed by a PVC Nov 6 01:59:20.088: INFO: Waiting for PV local-pvwnc2r to bind to PVC pvc-pq6dr Nov 6 01:59:20.088: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-pq6dr] to have phase Bound Nov 6 01:59:20.089: INFO: PersistentVolumeClaim pvc-pq6dr found but phase is Pending instead of Bound. 
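The bind wait above (repeated in every local-volume spec in this run) simply polls the claim's phase until it reports Bound. Done by hand with standard kubectl, using the namespace and claim name copied from the entries above, it is:

  # poll until Bound, mirroring the 3m0s timeout used by the framework
  until [ "$(kubectl get pvc pvc-pq6dr -n persistent-local-volumes-test-6486 -o jsonpath='{.status.phase}')" = "Bound" ]; do
    sleep 2
  done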
Nov 6 01:59:22.092: INFO: PersistentVolumeClaim pvc-pq6dr found and phase=Bound (2.004652879s) Nov 6 01:59:22.092: INFO: Waiting up to 3m0s for PersistentVolume local-pvwnc2r to have phase Bound Nov 6 01:59:22.094: INFO: PersistentVolume local-pvwnc2r found and phase=Bound (1.898838ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 6 01:59:22.098: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:22.100: INFO: Deleting PersistentVolumeClaim "pvc-pq6dr" Nov 6 01:59:22.103: INFO: Deleting PersistentVolume "local-pvwnc2r" STEP: Removing the test directory Nov 6 01:59:22.107: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-720d9f28-0246-4a7c-80ec-8eb912504991] Namespace:persistent-local-volumes-test-6486 PodName:hostexec-node2-tt6kt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:22.107: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:22.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6486" for this suite. 
S [SKIPPING] [10.273 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:22.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 6 01:59:22.380: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:22.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1990" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:485 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:57.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-7f33de38-8f4e-4962-87b0-c140ced0d6d4" Nov 6 01:59:01.849: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7f33de38-8f4e-4962-87b0-c140ced0d6d4 && dd if=/dev/zero of=/tmp/local-volume-test-7f33de38-8f4e-4962-87b0-c140ced0d6d4/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-7f33de38-8f4e-4962-87b0-c140ced0d6d4/file] Namespace:persistent-local-volumes-test-1372 PodName:hostexec-node2-8xzmp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:01.849: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:02.005: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7f33de38-8f4e-4962-87b0-c140ced0d6d4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1372 PodName:hostexec-node2-8xzmp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:02.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:02.096: INFO: Creating a PV followed by a PVC Nov 6 01:59:02.104: INFO: Waiting for PV local-pv696rw to bind to PVC pvc-jt4f2 Nov 6 01:59:02.104: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-jt4f2] to have phase Bound Nov 6 01:59:02.106: INFO: PersistentVolumeClaim pvc-jt4f2 found but phase is Pending instead of Bound. Nov 6 01:59:04.111: INFO: PersistentVolumeClaim pvc-jt4f2 found but phase is Pending instead of Bound. 
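The blockfswithoutformat volume in the spec above is backed by a loop device that the hostexec pod creates on node2. Reduced to shell, with the same commands the log shows and an illustrative path, the setup and the later device lookup are:

  # create a 20 MiB backing file and attach it to the first free loop device
  mkdir -p /tmp/local-volume-test-<uuid>
  dd if=/dev/zero of=/tmp/local-volume-test-<uuid>/file bs=4096 count=5120
  losetup -f /tmp/local-volume-test-<uuid>/file
  # find which /dev/loopN the file ended up on (used for the PV and for teardown)
  losetup | grep /tmp/local-volume-test-<uuid>/file | awk '{ print $1 }'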
Nov 6 01:59:06.114: INFO: PersistentVolumeClaim pvc-jt4f2 found but phase is Pending instead of Bound. Nov 6 01:59:08.121: INFO: PersistentVolumeClaim pvc-jt4f2 found and phase=Bound (6.016886728s) Nov 6 01:59:08.121: INFO: Waiting up to 3m0s for PersistentVolume local-pv696rw to have phase Bound Nov 6 01:59:08.123: INFO: PersistentVolume local-pv696rw found and phase=Bound (2.246646ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:59:20.151: INFO: pod "pod-1cd01fe6-6de0-4b9d-a7f2-7962b27bab9b" created on Node "node2" STEP: Writing in pod1 Nov 6 01:59:20.151: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1372 PodName:pod-1cd01fe6-6de0-4b9d-a7f2-7962b27bab9b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:20.151: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:20.240: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 01:59:20.240: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1372 PodName:pod-1cd01fe6-6de0-4b9d-a7f2-7962b27bab9b ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:20.240: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:20.317: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-1cd01fe6-6de0-4b9d-a7f2-7962b27bab9b in namespace persistent-local-volumes-test-1372 STEP: Creating pod2 STEP: Creating a pod Nov 6 01:59:28.341: INFO: pod "pod-36b8b51b-de53-48c6-94a0-5e9362b7620e" created on Node "node2" STEP: Reading in pod2 Nov 6 01:59:28.341: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-1372 PodName:pod-36b8b51b-de53-48c6-94a0-5e9362b7620e ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:28.341: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:28.874: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-36b8b51b-de53-48c6-94a0-5e9362b7620e in namespace persistent-local-volumes-test-1372 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:28.879: INFO: Deleting PersistentVolumeClaim "pvc-jt4f2" Nov 6 01:59:28.882: INFO: Deleting PersistentVolume "local-pv696rw" Nov 6 01:59:28.887: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-7f33de38-8f4e-4962-87b0-c140ced0d6d4/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1372 PodName:hostexec-node2-8xzmp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:28.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path 
/tmp/local-volume-test-7f33de38-8f4e-4962-87b0-c140ced0d6d4/file Nov 6 01:59:29.000: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1372 PodName:hostexec-node2-8xzmp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:29.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-7f33de38-8f4e-4962-87b0-c140ced0d6d4 Nov 6 01:59:29.111: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7f33de38-8f4e-4962-87b0-c140ced0d6d4] Namespace:persistent-local-volumes-test-1372 PodName:hostexec-node2-8xzmp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:29.111: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:29.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1372" for this suite. • [SLOW TEST:31.420 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":580,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:22.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node1" using path "/tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728" Nov 6 01:59:26.475: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728 && dd if=/dev/zero of=/tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728/file bs=4096 count=5120 && losetup -f 
/tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728/file] Namespace:persistent-local-volumes-test-4581 PodName:hostexec-node1-vx44k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:26.475: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:26.806: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4581 PodName:hostexec-node1-vx44k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:26.806: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:27.186: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728 && chmod o+rwx /tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728] Namespace:persistent-local-volumes-test-4581 PodName:hostexec-node1-vx44k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:27.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:27.484: INFO: Creating a PV followed by a PVC Nov 6 01:59:27.491: INFO: Waiting for PV local-pvjsfbd to bind to PVC pvc-wvhl6 Nov 6 01:59:27.491: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-wvhl6] to have phase Bound Nov 6 01:59:27.493: INFO: PersistentVolumeClaim pvc-wvhl6 found but phase is Pending instead of Bound. Nov 6 01:59:29.497: INFO: PersistentVolumeClaim pvc-wvhl6 found and phase=Bound (2.006888401s) Nov 6 01:59:29.498: INFO: Waiting up to 3m0s for PersistentVolume local-pvjsfbd to have phase Bound Nov 6 01:59:29.500: INFO: PersistentVolume local-pvjsfbd found and phase=Bound (2.672495ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 6 01:59:29.505: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:29.506: INFO: Deleting PersistentVolumeClaim "pvc-wvhl6" Nov 6 01:59:29.511: INFO: Deleting PersistentVolume "local-pvjsfbd" Nov 6 01:59:29.515: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728] Namespace:persistent-local-volumes-test-4581 PodName:hostexec-node1-vx44k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:29.515: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:29.628: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-4581 
PodName:hostexec-node1-vx44k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:29.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node1" at path /tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728/file Nov 6 01:59:29.731: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-4581 PodName:hostexec-node1-vx44k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:29.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728 Nov 6 01:59:29.835: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-818450e1-fac4-479a-8b0f-9c52bf662728] Namespace:persistent-local-volumes-test-4581 PodName:hostexec-node1-vx44k ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:29.835: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:29.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4581" for this suite. S [SKIPPING] [7.508 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:21.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-b1bcfcde-0843-434c-8177-020275404dbf" Nov 6 01:59:29.151: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b1bcfcde-0843-434c-8177-020275404dbf" && 
mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b1bcfcde-0843-434c-8177-020275404dbf" "/tmp/local-volume-test-b1bcfcde-0843-434c-8177-020275404dbf"] Namespace:persistent-local-volumes-test-3224 PodName:hostexec-node2-5d697 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:29.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:29.283: INFO: Creating a PV followed by a PVC Nov 6 01:59:29.290: INFO: Waiting for PV local-pvgbtsw to bind to PVC pvc-b5b9w Nov 6 01:59:29.290: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-b5b9w] to have phase Bound Nov 6 01:59:29.292: INFO: PersistentVolumeClaim pvc-b5b9w found but phase is Pending instead of Bound. Nov 6 01:59:31.295: INFO: PersistentVolumeClaim pvc-b5b9w found and phase=Bound (2.005039495s) Nov 6 01:59:31.295: INFO: Waiting up to 3m0s for PersistentVolume local-pvgbtsw to have phase Bound Nov 6 01:59:31.297: INFO: PersistentVolume local-pvgbtsw found and phase=Bound (1.819756ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 6 01:59:31.301: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:31.303: INFO: Deleting PersistentVolumeClaim "pvc-b5b9w" Nov 6 01:59:31.308: INFO: Deleting PersistentVolume "local-pvgbtsw" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-b1bcfcde-0843-434c-8177-020275404dbf" Nov 6 01:59:31.312: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b1bcfcde-0843-434c-8177-020275404dbf"] Namespace:persistent-local-volumes-test-3224 PodName:hostexec-node2-5d697 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:31.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:31.826: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b1bcfcde-0843-434c-8177-020275404dbf] Namespace:persistent-local-volumes-test-3224 PodName:hostexec-node2-5d697 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:31.827: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:31.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3224" for this suite. 
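The skipped fsGroup specs just above (blockfswithformat and tmpfs) differ only in how the node-side volume is prepared: blockfswithformat layers an ext4 filesystem on the loop device, while tmpfs uses a size-limited RAM mount. As run through the hostexec pod, the node-side commands are roughly the following (paths and the loop device name illustrative):

  # blockfswithformat: format the loop device and mount it over the backing directory
  mkfs -t ext4 /dev/loop0
  mount -t ext4 /dev/loop0 /tmp/local-volume-test-<uuid>
  chmod o+rwx /tmp/local-volume-test-<uuid>
  # tmpfs: a 10 MiB RAM-backed mount instead of a loop device
  mkdir -p /tmp/local-volume-test-<uuid>
  mount -t tmpfs -o size=10m tmpfs-/tmp/local-volume-test-<uuid> /tmp/local-volume-test-<uuid>
  # teardown common to both: unmount, detach the loop device if one was used, remove the directory
  umount /tmp/local-volume-test-<uuid>
  losetup -d /dev/loop0
  rm -r /tmp/local-volume-test-<uuid>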
S [SKIPPING] [10.868 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:32.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename flexvolume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:169 Nov 6 01:59:32.059: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:32.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "flexvolume-834" for this suite. 
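The Flexvolumes spec above is skipped because the framework cannot read an SSH private key for the local provider; per the error message it looks for /root/.ssh/id_rsa on the machine running the suite. As a hedged aside (generic OpenSSH commands, not framework code), such SSH-gated specs can be made runnable by generating a key at that path and installing the public half on each node:

  # create the key where the error message says the framework looks for it
  ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
  # install the public key on every node the suite may need to reach (user/host illustrative)
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2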
S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds] [sig-storage] Flexvolumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should be mountable when non-attachable [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:188 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/flexvolume.go:173 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:08.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 6 01:59:08.268: INFO: The status of Pod test-hostpath-type-fptr6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:59:10.272: INFO: The status of Pod test-hostpath-type-fptr6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:59:12.272: INFO: The status of Pod test-hostpath-type-fptr6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:59:14.273: INFO: The status of Pod test-hostpath-type-fptr6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:59:16.272: INFO: The status of Pod test-hostpath-type-fptr6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:59:18.270: INFO: The status of Pod test-hostpath-type-fptr6 is Pending, waiting for it to be Running (with Ready = true) Nov 6 01:59:20.271: INFO: The status of Pod test-hostpath-type-fptr6 is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:32.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-3804" for this suite. 
• [SLOW TEST:24.102 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:76 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory","total":-1,"completed":7,"skipped":128,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:32.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 6 01:59:32.412: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:32.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-7360" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:77 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:29.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91 STEP: Creating a pod to test downward API volume plugin Nov 6 01:59:29.980: INFO: Waiting up to 5m0s for pod "metadata-volume-54a68c78-4d85-4739-8fa1-b98646f955d5" in namespace "projected-5405" to be "Succeeded or Failed" Nov 6 01:59:29.983: INFO: Pod "metadata-volume-54a68c78-4d85-4739-8fa1-b98646f955d5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.919402ms Nov 6 01:59:31.986: INFO: Pod "metadata-volume-54a68c78-4d85-4739-8fa1-b98646f955d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005865099s Nov 6 01:59:33.989: INFO: Pod "metadata-volume-54a68c78-4d85-4739-8fa1-b98646f955d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008787056s STEP: Saw pod success Nov 6 01:59:33.989: INFO: Pod "metadata-volume-54a68c78-4d85-4739-8fa1-b98646f955d5" satisfied condition "Succeeded or Failed" Nov 6 01:59:33.991: INFO: Trying to get logs from node node2 pod metadata-volume-54a68c78-4d85-4739-8fa1-b98646f955d5 container client-container: STEP: delete the pod Nov 6 01:59:34.011: INFO: Waiting for pod metadata-volume-54a68c78-4d85-4739-8fa1-b98646f955d5 to disappear Nov 6 01:59:34.013: INFO: Pod metadata-volume-54a68c78-4d85-4739-8fa1-b98646f955d5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:34.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5405" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":15,"skipped":593,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:21.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-8bc7bbff-dc60-49c6-81ba-b060519d5e2a" Nov 6 01:59:29.875: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-8bc7bbff-dc60-49c6-81ba-b060519d5e2a && dd if=/dev/zero of=/tmp/local-volume-test-8bc7bbff-dc60-49c6-81ba-b060519d5e2a/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-8bc7bbff-dc60-49c6-81ba-b060519d5e2a/file] Namespace:persistent-local-volumes-test-8213 PodName:hostexec-node2-4h8qv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:29.875: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:30.363: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-8bc7bbff-dc60-49c6-81ba-b060519d5e2a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8213 PodName:hostexec-node2-4h8qv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:30.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:30.528: INFO: Creating a PV followed by a PVC Nov 6 01:59:30.535: INFO: 
Waiting for PV local-pv77pw4 to bind to PVC pvc-gzzms Nov 6 01:59:30.535: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-gzzms] to have phase Bound Nov 6 01:59:30.537: INFO: PersistentVolumeClaim pvc-gzzms found but phase is Pending instead of Bound. Nov 6 01:59:32.541: INFO: PersistentVolumeClaim pvc-gzzms found but phase is Pending instead of Bound. Nov 6 01:59:34.546: INFO: PersistentVolumeClaim pvc-gzzms found but phase is Pending instead of Bound. Nov 6 01:59:36.552: INFO: PersistentVolumeClaim pvc-gzzms found but phase is Pending instead of Bound. Nov 6 01:59:38.556: INFO: PersistentVolumeClaim pvc-gzzms found and phase=Bound (8.02138956s) Nov 6 01:59:38.556: INFO: Waiting up to 3m0s for PersistentVolume local-pv77pw4 to have phase Bound Nov 6 01:59:38.559: INFO: PersistentVolume local-pv77pw4 found and phase=Bound (2.895861ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 [It] should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Nov 6 01:59:38.564: INFO: Disabled temporarily, reopen after #73168 is fixed [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:38.566: INFO: Deleting PersistentVolumeClaim "pvc-gzzms" Nov 6 01:59:38.572: INFO: Deleting PersistentVolume "local-pv77pw4" Nov 6 01:59:38.576: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-8bc7bbff-dc60-49c6-81ba-b060519d5e2a/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-8213 PodName:hostexec-node2-4h8qv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:38.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-8bc7bbff-dc60-49c6-81ba-b060519d5e2a/file Nov 6 01:59:38.858: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-8213 PodName:hostexec-node2-4h8qv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:38.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-8bc7bbff-dc60-49c6-81ba-b060519d5e2a Nov 6 01:59:39.046: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8bc7bbff-dc60-49c6-81ba-b060519d5e2a] Namespace:persistent-local-volumes-test-8213 PodName:hostexec-node2-4h8qv ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:39.046: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:39.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8213" for this suite. 
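Editor's note: the [Volume type: blockfswithoutformat] setup above backs the local volume with a loop device, running mkdir, dd and losetup through a privileged hostexec pod and nsenter, and later recovering the /dev/loopN path with losetup | grep | awk. The sketch below reproduces that shell sequence directly on a Linux host (assuming root and util-linux), without the hostexec pod indirection.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createLoopBackedDevice mirrors the logged commands: create a 20 MiB backing
// file under dir and attach it to the first free loop device.
func createLoopBackedDevice(dir string) (string, error) {
	script := fmt.Sprintf(
		"mkdir -p %[1]s && dd if=/dev/zero of=%[1]s/file bs=4096 count=5120 && losetup -f %[1]s/file", dir)
	if out, err := exec.Command("sh", "-c", script).CombinedOutput(); err != nil {
		return "", fmt.Errorf("setup failed: %v: %s", err, out)
	}
	// Recover the loop device the kernel picked (e.g. /dev/loop1), as the test
	// does before tearing it down with `losetup -d`.
	out, err := exec.Command("sh", "-c",
		fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir)).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	dev, err := createLoopBackedDevice("/tmp/local-volume-demo")
	fmt.Println(dev, err)
}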
S [SKIPPING] [17.508 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Disabled temporarily, reopen after #73168 is fixed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:287 ------------------------------ SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:39.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:74 Nov 6 01:59:39.390: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:39.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-8948" for this suite. 
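Editor's note: the Pod Disks spec above skips with "Requires at least 2 nodes (not -1)"; the -1 appears to be the framework's unset node-count default for the local provider, so the guard falls back to what the API reports. A minimal client-go sketch of such a node-count guard, assuming a kubeconfig at the default location:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	schedulable := 0
	for _, n := range nodes.Items {
		if !n.Spec.Unschedulable {
			schedulable++
		}
	}
	if schedulable < 2 {
		fmt.Printf("skipping: requires at least 2 schedulable nodes, found %d\n", schedulable)
		return
	}
	fmt.Printf("found %d schedulable nodes, proceeding\n", schedulable)
}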
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:231 using 4 containers and 1 PDs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:254 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:29.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-0d2762de-b21f-4a12-8b45-924147227a00" Nov 6 01:59:33.310: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0d2762de-b21f-4a12-8b45-924147227a00 && dd if=/dev/zero of=/tmp/local-volume-test-0d2762de-b21f-4a12-8b45-924147227a00/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-0d2762de-b21f-4a12-8b45-924147227a00/file] Namespace:persistent-local-volumes-test-6906 PodName:hostexec-node2-rj2bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:33.310: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:33.515: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0d2762de-b21f-4a12-8b45-924147227a00/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6906 PodName:hostexec-node2-rj2bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:33.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:33.621: INFO: Creating a PV followed by a PVC Nov 6 01:59:33.627: INFO: Waiting for PV local-pvwt6nn to bind to PVC pvc-l5sjn Nov 6 01:59:33.627: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-l5sjn] to have phase Bound Nov 6 01:59:33.630: INFO: PersistentVolumeClaim pvc-l5sjn found but phase is Pending instead of Bound. 
Nov 6 01:59:35.634: INFO: PersistentVolumeClaim pvc-l5sjn found and phase=Bound (2.006425258s) Nov 6 01:59:35.634: INFO: Waiting up to 3m0s for PersistentVolume local-pvwt6nn to have phase Bound Nov 6 01:59:35.636: INFO: PersistentVolume local-pvwt6nn found and phase=Bound (1.880032ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 01:59:41.663: INFO: pod "pod-e77aa4a2-665e-4671-9b31-8a225081c545" created on Node "node2" STEP: Writing in pod1 Nov 6 01:59:41.663: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6906 PodName:pod-e77aa4a2-665e-4671-9b31-8a225081c545 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:41.663: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:41.752: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 6 01:59:41.752: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6906 PodName:pod-e77aa4a2-665e-4671-9b31-8a225081c545 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:41.752: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:41.834: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 6 01:59:41.834: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6906 PodName:pod-e77aa4a2-665e-4671-9b31-8a225081c545 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 01:59:41.834: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:41.913: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /dev/loop0 > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-e77aa4a2-665e-4671-9b31-8a225081c545 in namespace persistent-local-volumes-test-6906 [AfterEach] [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:41.919: INFO: Deleting PersistentVolumeClaim "pvc-l5sjn" Nov 6 01:59:41.922: INFO: Deleting PersistentVolume "local-pvwt6nn" Nov 6 01:59:41.925: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0d2762de-b21f-4a12-8b45-924147227a00/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-6906 PodName:hostexec-node2-rj2bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:41.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" 
on node "node2" at path /tmp/local-volume-test-0d2762de-b21f-4a12-8b45-924147227a00/file Nov 6 01:59:42.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-6906 PodName:hostexec-node2-rj2bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:42.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-0d2762de-b21f-4a12-8b45-924147227a00 Nov 6 01:59:42.095: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0d2762de-b21f-4a12-8b45-924147227a00] Namespace:persistent-local-volumes-test-6906 PodName:hostexec-node2-rj2bg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:42.095: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:42.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6906" for this suite. • [SLOW TEST:12.944 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: blockfswithoutformat] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":17,"skipped":594,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:39.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-67da429a-2666-4b58-87fc-220c20d4b637" Nov 6 01:59:43.475: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-67da429a-2666-4b58-87fc-220c20d4b637 && dd if=/dev/zero of=/tmp/local-volume-test-67da429a-2666-4b58-87fc-220c20d4b637/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-67da429a-2666-4b58-87fc-220c20d4b637/file] 
Namespace:persistent-local-volumes-test-2812 PodName:hostexec-node2-rtmqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:43.475: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:43.830: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-67da429a-2666-4b58-87fc-220c20d4b637/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2812 PodName:hostexec-node2-rtmqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:43.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:44.273: INFO: Creating a PV followed by a PVC Nov 6 01:59:44.283: INFO: Waiting for PV local-pvv6t69 to bind to PVC pvc-x98r2 Nov 6 01:59:44.283: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-x98r2] to have phase Bound Nov 6 01:59:44.285: INFO: PersistentVolumeClaim pvc-x98r2 found but phase is Pending instead of Bound. Nov 6 01:59:46.289: INFO: PersistentVolumeClaim pvc-x98r2 found and phase=Bound (2.006232872s) Nov 6 01:59:46.289: INFO: Waiting up to 3m0s for PersistentVolume local-pvv6t69 to have phase Bound Nov 6 01:59:46.292: INFO: PersistentVolume local-pvv6t69 found and phase=Bound (2.25539ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Nov 6 01:59:46.296: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 01:59:46.297: INFO: Deleting PersistentVolumeClaim "pvc-x98r2" Nov 6 01:59:46.302: INFO: Deleting PersistentVolume "local-pvv6t69" Nov 6 01:59:46.307: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-67da429a-2666-4b58-87fc-220c20d4b637/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2812 PodName:hostexec-node2-rtmqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:46.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop1" on node "node2" at path /tmp/local-volume-test-67da429a-2666-4b58-87fc-220c20d4b637/file Nov 6 01:59:46.488: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop1] Namespace:persistent-local-volumes-test-2812 PodName:hostexec-node2-rtmqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:46.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-67da429a-2666-4b58-87fc-220c20d4b637 Nov 6 01:59:46.574: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-67da429a-2666-4b58-87fc-220c20d4b637] Namespace:persistent-local-volumes-test-2812 PodName:hostexec-node2-rtmqc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:46.574: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:46.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2812" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [7.381 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set fsGroup for one pod [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267 We don't set fsGroup on block device, skipped. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:46.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume-provisioning STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146 [It] deletion should be idempotent /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557 Nov 6 01:59:46.814: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:46.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-provisioning-1651" for this suite. 
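Editor's note: the local-volume blocks above repeatedly wait for a claim to reach phase Bound (pvc-gzzms, pvc-l5sjn, pvc-x98r2), logging "found but phase is Pending instead of Bound" on each poll. A minimal client-go sketch of that polling loop follows; namespace, claim name, interval and timeout are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls until the claim reports phase Bound or the timeout expires.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolumeClaim %s phase: %s\n", name, pvc.Status.Phase)
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and claim name are placeholders.
	if err := waitForPVCBound(cs, "default", "my-claim", 3*time.Minute); err != nil {
		panic(err)
	}
}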
S [SKIPPING] [0.031 seconds] [sig-storage] Dynamic Provisioning /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 DynamicProvisioner [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:152 deletion should be idempotent [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:557 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:563 ------------------------------ SSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":13,"skipped":550,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:19.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 6 01:59:23.817: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-21f2f0e3-ddb5-4f12-9d62-b5e7f2c7aba8] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:23.817: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:24.213: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3d02d64b-7628-42db-8d61-a6937ac85da6] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:24.213: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:24.309: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-07813346-90d3-462f-920f-b91cbb07efda] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:24.309: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:24.411: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d4be1d35-8fc9-4e3e-9a3d-fb485da049f1] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:24.411: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:24.515: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
/tmp/local-volume-test-277f2bed-8182-4ee9-b347-d7ad3d3a73dc] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:24.515: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:24.810: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6b9c3a51-eaa0-499d-b889-6b6e07c8dc06] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:24.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:24.912: INFO: Creating a PV followed by a PVC Nov 6 01:59:24.919: INFO: Creating a PV followed by a PVC Nov 6 01:59:24.925: INFO: Creating a PV followed by a PVC Nov 6 01:59:24.930: INFO: Creating a PV followed by a PVC Nov 6 01:59:24.936: INFO: Creating a PV followed by a PVC Nov 6 01:59:24.942: INFO: Creating a PV followed by a PVC Nov 6 01:59:34.985: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 6 01:59:41.002: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9af82438-3aeb-4175-84fe-63f4954f9ef2] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:41.002: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:41.103: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-547acd9b-f406-4962-9725-68efc9235f7d] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:41.103: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:41.200: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-3ad2dfc2-3dfe-496e-8c54-9b7b85fed92a] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:41.200: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:41.295: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-b62330e6-f292-4e75-b5f1-397124381f5f] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:41.295: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:41.391: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-c05bfee7-844a-4616-8370-7c648871037c] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:41.391: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:41.532: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2ec487c2-5520-4168-9acd-e7c35c7c907a] 
Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:41.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:41.675: INFO: Creating a PV followed by a PVC Nov 6 01:59:41.682: INFO: Creating a PV followed by a PVC Nov 6 01:59:41.687: INFO: Creating a PV followed by a PVC Nov 6 01:59:41.693: INFO: Creating a PV followed by a PVC Nov 6 01:59:41.699: INFO: Creating a PV followed by a PVC Nov 6 01:59:41.705: INFO: Creating a PV followed by a PVC Nov 6 01:59:51.749: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes spread across nodes when pod has anti-affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Nov 6 01:59:51.749: INFO: Runs only when number of nodes >= 3 [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 6 01:59:51.751: INFO: Deleting PersistentVolumeClaim "pvc-gqm6w" Nov 6 01:59:51.755: INFO: Deleting PersistentVolume "local-pvgf55l" STEP: Cleaning up PVC and PV Nov 6 01:59:51.759: INFO: Deleting PersistentVolumeClaim "pvc-sz5nm" Nov 6 01:59:51.762: INFO: Deleting PersistentVolume "local-pvrtw8d" STEP: Cleaning up PVC and PV Nov 6 01:59:51.766: INFO: Deleting PersistentVolumeClaim "pvc-mg86x" Nov 6 01:59:51.770: INFO: Deleting PersistentVolume "local-pvsxx4m" STEP: Cleaning up PVC and PV Nov 6 01:59:51.773: INFO: Deleting PersistentVolumeClaim "pvc-5ckxk" Nov 6 01:59:51.777: INFO: Deleting PersistentVolume "local-pvljdmt" STEP: Cleaning up PVC and PV Nov 6 01:59:51.782: INFO: Deleting PersistentVolumeClaim "pvc-mhspr" Nov 6 01:59:51.785: INFO: Deleting PersistentVolume "local-pvnlhsq" STEP: Cleaning up PVC and PV Nov 6 01:59:51.788: INFO: Deleting PersistentVolumeClaim "pvc-qf87d" Nov 6 01:59:51.791: INFO: Deleting PersistentVolume "local-pv27jvr" STEP: Removing the test directory Nov 6 01:59:51.795: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-21f2f0e3-ddb5-4f12-9d62-b5e7f2c7aba8] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:51.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:51.903: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3d02d64b-7628-42db-8d61-a6937ac85da6] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:51.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:52.026: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-07813346-90d3-462f-920f-b91cbb07efda] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:52.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:52.121: INFO: ExecWithOptions 
{Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d4be1d35-8fc9-4e3e-9a3d-fb485da049f1] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:52.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:52.228: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-277f2bed-8182-4ee9-b347-d7ad3d3a73dc] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:52.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:52.332: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6b9c3a51-eaa0-499d-b889-6b6e07c8dc06] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node1-hjtkh ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:52.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 6 01:59:52.422: INFO: Deleting PersistentVolumeClaim "pvc-ptd8p" Nov 6 01:59:52.426: INFO: Deleting PersistentVolume "local-pvzbmlg" STEP: Cleaning up PVC and PV Nov 6 01:59:52.430: INFO: Deleting PersistentVolumeClaim "pvc-6gqqz" Nov 6 01:59:52.433: INFO: Deleting PersistentVolume "local-pvgpf2c" STEP: Cleaning up PVC and PV Nov 6 01:59:52.437: INFO: Deleting PersistentVolumeClaim "pvc-g76sr" Nov 6 01:59:52.440: INFO: Deleting PersistentVolume "local-pv5nt7v" STEP: Cleaning up PVC and PV Nov 6 01:59:52.444: INFO: Deleting PersistentVolumeClaim "pvc-x2f7t" Nov 6 01:59:52.447: INFO: Deleting PersistentVolume "local-pvlvqsk" STEP: Cleaning up PVC and PV Nov 6 01:59:52.451: INFO: Deleting PersistentVolumeClaim "pvc-ppj8j" Nov 6 01:59:52.455: INFO: Deleting PersistentVolume "local-pvb5224" STEP: Cleaning up PVC and PV Nov 6 01:59:52.458: INFO: Deleting PersistentVolumeClaim "pvc-fvp8m" Nov 6 01:59:52.462: INFO: Deleting PersistentVolume "local-pvzfmvv" STEP: Removing the test directory Nov 6 01:59:52.466: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9af82438-3aeb-4175-84fe-63f4954f9ef2] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:52.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:52.655: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-547acd9b-f406-4962-9725-68efc9235f7d] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:52.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:52.763: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-3ad2dfc2-3dfe-496e-8c54-9b7b85fed92a] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} 
Nov 6 01:59:52.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:52.879: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b62330e6-f292-4e75-b5f1-397124381f5f] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:52.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:53.447: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c05bfee7-844a-4616-8370-7c648871037c] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:53.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 01:59:53.531: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2ec487c2-5520-4168-9acd-e7c35c7c907a] Namespace:persistent-local-volumes-test-8322 PodName:hostexec-node2-l55kg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:53.531: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:59:53.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8322" for this suite. S [SKIPPING] [33.871 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes spread across nodes when pod has anti-affinity [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:410 Runs only when number of nodes >= 3 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:412 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:46.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] token should not be plumbed down when csiServiceAccountTokenEnabled=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 STEP: Building a driver namespace object, basename csi-mock-volumes-5221 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:58:46.430: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-attacher Nov 6 01:58:46.433: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5221 Nov 6 01:58:46.433: INFO: Define cluster role 
external-attacher-runner-csi-mock-volumes-5221 Nov 6 01:58:46.435: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5221 Nov 6 01:58:46.438: INFO: creating *v1.Role: csi-mock-volumes-5221-4471/external-attacher-cfg-csi-mock-volumes-5221 Nov 6 01:58:46.441: INFO: creating *v1.RoleBinding: csi-mock-volumes-5221-4471/csi-attacher-role-cfg Nov 6 01:58:46.444: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-provisioner Nov 6 01:58:46.447: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5221 Nov 6 01:58:46.447: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5221 Nov 6 01:58:46.450: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5221 Nov 6 01:58:46.452: INFO: creating *v1.Role: csi-mock-volumes-5221-4471/external-provisioner-cfg-csi-mock-volumes-5221 Nov 6 01:58:46.456: INFO: creating *v1.RoleBinding: csi-mock-volumes-5221-4471/csi-provisioner-role-cfg Nov 6 01:58:46.458: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-resizer Nov 6 01:58:46.461: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5221 Nov 6 01:58:46.461: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5221 Nov 6 01:58:46.463: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5221 Nov 6 01:58:46.467: INFO: creating *v1.Role: csi-mock-volumes-5221-4471/external-resizer-cfg-csi-mock-volumes-5221 Nov 6 01:58:46.469: INFO: creating *v1.RoleBinding: csi-mock-volumes-5221-4471/csi-resizer-role-cfg Nov 6 01:58:46.472: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-snapshotter Nov 6 01:58:46.475: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5221 Nov 6 01:58:46.475: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5221 Nov 6 01:58:46.477: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5221 Nov 6 01:58:46.480: INFO: creating *v1.Role: csi-mock-volumes-5221-4471/external-snapshotter-leaderelection-csi-mock-volumes-5221 Nov 6 01:58:46.482: INFO: creating *v1.RoleBinding: csi-mock-volumes-5221-4471/external-snapshotter-leaderelection Nov 6 01:58:46.485: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-mock Nov 6 01:58:46.489: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5221 Nov 6 01:58:46.492: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5221 Nov 6 01:58:46.495: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5221 Nov 6 01:58:46.497: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5221 Nov 6 01:58:46.500: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5221 Nov 6 01:58:46.502: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5221 Nov 6 01:58:46.506: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5221 Nov 6 01:58:46.510: INFO: creating *v1.StatefulSet: csi-mock-volumes-5221-4471/csi-mockplugin Nov 6 01:58:46.514: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5221 Nov 6 01:58:46.517: INFO: creating *v1.StatefulSet: csi-mock-volumes-5221-4471/csi-mockplugin-attacher Nov 6 01:58:46.520: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5221" Nov 6 01:58:46.522: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5221 to 
register on node node1 STEP: Creating pod Nov 6 01:58:51.533: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 01:58:51.538: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-p6wl6] to have phase Bound Nov 6 01:58:51.540: INFO: PersistentVolumeClaim pvc-p6wl6 found but phase is Pending instead of Bound. Nov 6 01:58:53.542: INFO: PersistentVolumeClaim pvc-p6wl6 found and phase=Bound (2.004312575s) STEP: Deleting the previously created pod Nov 6 01:59:13.563: INFO: Deleting pod "pvc-volume-tester-hmjt2" in namespace "csi-mock-volumes-5221" Nov 6 01:59:13.567: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hmjt2" to be fully deleted STEP: Checking CSI driver logs Nov 6 01:59:19.580: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/90f1dca6-3873-4664-9715-181d7e62481b/volumes/kubernetes.io~csi/pvc-80083f63-ec4c-47b0-8258-f39fdfadcf9e/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-hmjt2 Nov 6 01:59:19.580: INFO: Deleting pod "pvc-volume-tester-hmjt2" in namespace "csi-mock-volumes-5221" STEP: Deleting claim pvc-p6wl6 Nov 6 01:59:19.589: INFO: Waiting up to 2m0s for PersistentVolume pvc-80083f63-ec4c-47b0-8258-f39fdfadcf9e to get deleted Nov 6 01:59:19.591: INFO: PersistentVolume pvc-80083f63-ec4c-47b0-8258-f39fdfadcf9e found and phase=Bound (1.735893ms) Nov 6 01:59:21.595: INFO: PersistentVolume pvc-80083f63-ec4c-47b0-8258-f39fdfadcf9e found and phase=Released (2.005739097s) Nov 6 01:59:23.597: INFO: PersistentVolume pvc-80083f63-ec4c-47b0-8258-f39fdfadcf9e found and phase=Released (4.008562868s) Nov 6 01:59:25.602: INFO: PersistentVolume pvc-80083f63-ec4c-47b0-8258-f39fdfadcf9e found and phase=Released (6.01358718s) Nov 6 01:59:27.605: INFO: PersistentVolume pvc-80083f63-ec4c-47b0-8258-f39fdfadcf9e was removed STEP: Deleting storageclass csi-mock-volumes-5221-sc8qqtk STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5221 STEP: Waiting for namespaces [csi-mock-volumes-5221] to vanish STEP: uninstalling csi mock driver Nov 6 01:59:33.616: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-attacher Nov 6 01:59:33.620: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5221 Nov 6 01:59:33.624: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5221 Nov 6 01:59:33.628: INFO: deleting *v1.Role: csi-mock-volumes-5221-4471/external-attacher-cfg-csi-mock-volumes-5221 Nov 6 01:59:33.631: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5221-4471/csi-attacher-role-cfg Nov 6 01:59:33.635: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-provisioner Nov 6 01:59:33.638: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5221 Nov 6 01:59:33.642: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5221 Nov 6 01:59:33.645: INFO: deleting *v1.Role: csi-mock-volumes-5221-4471/external-provisioner-cfg-csi-mock-volumes-5221 Nov 6 01:59:33.649: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5221-4471/csi-provisioner-role-cfg Nov 6 01:59:33.652: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-resizer Nov 6 01:59:33.655: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5221 Nov 6 01:59:33.660: INFO: deleting 
*v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5221 Nov 6 01:59:33.664: INFO: deleting *v1.Role: csi-mock-volumes-5221-4471/external-resizer-cfg-csi-mock-volumes-5221 Nov 6 01:59:33.669: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5221-4471/csi-resizer-role-cfg Nov 6 01:59:33.676: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-snapshotter Nov 6 01:59:33.679: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5221 Nov 6 01:59:33.682: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5221 Nov 6 01:59:33.690: INFO: deleting *v1.Role: csi-mock-volumes-5221-4471/external-snapshotter-leaderelection-csi-mock-volumes-5221 Nov 6 01:59:33.693: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5221-4471/external-snapshotter-leaderelection Nov 6 01:59:33.696: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5221-4471/csi-mock Nov 6 01:59:33.700: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5221 Nov 6 01:59:33.703: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5221 Nov 6 01:59:33.706: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5221 Nov 6 01:59:33.710: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5221 Nov 6 01:59:33.713: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5221 Nov 6 01:59:33.717: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5221 Nov 6 01:59:33.721: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5221 Nov 6 01:59:33.724: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5221-4471/csi-mockplugin Nov 6 01:59:33.727: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5221 Nov 6 01:59:33.731: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5221-4471/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5221-4471 STEP: Waiting for namespaces [csi-mock-volumes-5221-4471] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:00:01.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:75.376 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIServiceAccountToken /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374 token should not be plumbed down when csiServiceAccountTokenEnabled=false /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":11,"skipped":502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:32.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:49 [It] should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 Nov 6 02:00:02.523: INFO: Deleting pod "pv-661"/"pod-ephm-test-projected-5vch" Nov 6 02:00:02.523: INFO: Deleting pod "pod-ephm-test-projected-5vch" in namespace "pv-661" Nov 6 02:00:02.528: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-5vch" to be fully deleted [AfterEach] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:00:10.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-661" for this suite. • [SLOW TEST:38.056 seconds] [sig-storage] Ephemeralstorage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 When pod refers to non-existent ephemeral storage /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53 should allow deletion of pod with invalid volume : projected /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55 ------------------------------ {"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":8,"skipped":190,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:00:10.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 6 02:00:10.580: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:00:10.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1047" for this suite. 
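Editor's note: the Ephemeralstorage case above ("should allow deletion of pod with invalid volume : projected") deletes a pod whose projected volume can never be populated and expects the deletion to complete anyway. A minimal client-go sketch of reproducing that shape; the namespace, pod name and the missing ConfigMap name are placeholders, not the test's generated names.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // placeholder namespace
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-ephm-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "demo",
				Image:        "busybox",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "broken", MountPath: "/mnt/broken"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "broken",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								// Intentionally missing ConfigMap: the volume never
								// mounts, so the pod cannot start.
								LocalObjectReference: corev1.LocalObjectReference{Name: "does-not-exist"},
							},
						}},
					},
				},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The point of the e2e case: deletion must still succeed even though the
	// volume was never set up.
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), pod.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}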
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:204 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:00:10.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 6 02:00:10.637: INFO: The status of Pod test-hostpath-type-22bdt is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:00:12.641: INFO: The status of Pod test-hostpath-type-22bdt is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:00:14.642: INFO: The status of Pod test-hostpath-type-22bdt is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:00:16.641: INFO: The status of Pod test-hostpath-type-22bdt is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Nov 6 02:00:16.643: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-9795 PodName:test-hostpath-type-22bdt ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:00:16.643: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:00:21.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-9795" for this suite. 
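The failure being checked in this HostPathType spec is purely declarative: the hostPath volume in the pod spec names a HostPathType, and kubelet validates the actual file type at that path before mounting. A minimal sketch of such a spec, assuming the standard k8s.io/api/core/v1 types; the helper name, image and mount path are illustrative, not the test's own code:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildHostPathPod (illustrative helper) mounts a hostPath volume whose
// declared type must match what actually exists at the path on the node.
func buildHostPathPod(nodeName string) *corev1.Pod {
    hostPathType := corev1.HostPathBlockDev // volume claims the path is a block device
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "hostpath-type-demo"},
        Spec: corev1.PodSpec{
            NodeName: nodeName,
            Containers: []corev1.Container{{
                Name:    "tester",
                Image:   "busybox",
                Command: []string{"sleep", "3600"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "dev",
                    MountPath: "/mnt/dev",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "dev",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{
                        Path: "/mnt/test/achardev", // created above with: mknod ... c 89 1
                        Type: &hostPathType,
                    },
                },
            }},
        },
    }
}

Because /mnt/test/achardev was created as a character device while the volume asks for BlockDevice, kubelet refuses the mount and emits the HostPathType error event the spec waits for.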
• [SLOW TEST:10.809 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:300 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev","total":-1,"completed":9,"skipped":197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:00:21.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 6 02:00:21.648: INFO: The status of Pod test-hostpath-type-5q6xh is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:00:23.652: INFO: The status of Pod test-hostpath-type-5q6xh is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:00:25.654: INFO: The status of Pod test-hostpath-type-5q6xh is Running (Ready = true) STEP: running on node node1 STEP: Create a character device for further testing Nov 6 02:00:25.656: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-4566 PodName:test-hostpath-type-5q6xh ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:00:25.656: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:00:27.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-4566" for this suite. 
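The ExecWithOptions entries in these HostPathType specs are the framework shelling into the long-running helper pod to run mknod. Outside the e2e framework, the same exec-into-pod call can be made with client-go's remotecommand package; a rough sketch under that assumption (function name illustrative, error handling trimmed):

package sketch

import (
    "bytes"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a shell command inside a running pod, roughly what the
// e2e framework's ExecWithOptions does for the mknod calls in this log.
func execInPod(cfg *rest.Config, cs kubernetes.Interface, ns, pod, container, cmd string) (string, string, error) {
    req := cs.CoreV1().RESTClient().Post().
        Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Container: container,
            Command:   []string{"/bin/sh", "-c", cmd},
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
    if err != nil {
        return "", "", err
    }
    var stdout, stderr bytes.Buffer
    // Stream blocks until the remote command exits.
    err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
    return stdout.String(), stderr.String(), err
}

For the run above this would be invoked roughly as execInPod(cfg, cs, "host-path-type-char-dev-4566", "test-hostpath-type-5q6xh", "host-path-testing", "mknod /mnt/test/achardev c 89 1").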
• [SLOW TEST:6.353 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting character device 'achardev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:290 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile","total":-1,"completed":10,"skipped":301,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:34.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:391 STEP: Setting up local volumes on node "node1" STEP: Initializing test volumes Nov 6 01:59:38.089: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-e69266ff-80b5-4eb3-9f9e-ebd33096c894] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:38.089: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:38.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f8ac6fc9-2f56-4dfa-b329-47405c1b5533] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:38.291: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:38.642: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-180b850e-29d2-4a2e-a87e-612e152c90d2] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:38.642: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:38.754: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-ae6149dc-a226-48d0-8075-c3a862e52d3f] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:38.754: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:38.841: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-155dd6c2-7baf-4d06-ae60-b457e2852bb9] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} 
Nov 6 01:59:38.841: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:38.927: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5b960bc2-eec6-49d7-afed-3fff70c7f6d2] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:38.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:39.040: INFO: Creating a PV followed by a PVC Nov 6 01:59:39.045: INFO: Creating a PV followed by a PVC Nov 6 01:59:39.051: INFO: Creating a PV followed by a PVC Nov 6 01:59:39.056: INFO: Creating a PV followed by a PVC Nov 6 01:59:39.062: INFO: Creating a PV followed by a PVC Nov 6 01:59:39.072: INFO: Creating a PV followed by a PVC Nov 6 01:59:49.128: INFO: PVCs were not bound within 10s (that's good) STEP: Setting up local volumes on node "node2" STEP: Initializing test volumes Nov 6 01:59:51.148: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5b620107-cff8-42a3-8da3-c1ec600b8f65] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:51.148: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:51.246: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d08c14c3-d138-4ae3-9936-0c8405dc400a] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:51.246: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:51.353: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-23c6cb70-f2c2-43c6-8ebd-87180b7452d0] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:51.353: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:51.436: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-5ba51b4e-10ac-4324-a165-8d8f4094b2e4] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:51.436: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:51.537: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2eb59091-c22e-40c2-82fb-259d02d89d28] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:51.537: INFO: >>> kubeConfig: /root/.kube/config Nov 6 01:59:51.653: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d771a4fb-3e28-4a9d-bea5-0f533f9d925b] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:51.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs 
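Each "Creating a PV followed by a PVC" entry that follows pairs a local PersistentVolume, backed by one of the /tmp/local-volume-test-* directories just created, with a claim. The PV carries required node affinity for its node, which is what later lets the StatefulSet's pod affinity keep all replicas and their volumes on a single node. A sketch of one such PV, assuming the standard core/v1 types; the capacity and storage class name are illustrative:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV builds a PersistentVolume backed by a directory on one node.
// Node affinity is mandatory for local volumes: consumers of this PV can
// only be scheduled onto that node.
func localPV(name, node, path string) *corev1.PersistentVolume {
    return &corev1.PersistentVolume{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.PersistentVolumeSpec{
            Capacity: corev1.ResourceList{
                corev1.ResourceStorage: resource.MustParse("2Gi"), // illustrative size
            },
            AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
            StorageClassName: "local-storage", // illustrative class name
            PersistentVolumeSource: corev1.PersistentVolumeSource{
                Local: &corev1.LocalVolumeSource{Path: path},
            },
            NodeAffinity: &corev1.VolumeNodeAffinity{
                Required: &corev1.NodeSelector{
                    NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                        MatchExpressions: []corev1.NodeSelectorRequirement{{
                            Key:      "kubernetes.io/hostname",
                            Operator: corev1.NodeSelectorOpIn,
                            Values:   []string{node},
                        }},
                    }},
                },
            },
        },
    }
}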
Nov 6 01:59:51.745: INFO: Creating a PV followed by a PVC Nov 6 01:59:51.752: INFO: Creating a PV followed by a PVC Nov 6 01:59:51.759: INFO: Creating a PV followed by a PVC Nov 6 01:59:51.765: INFO: Creating a PV followed by a PVC Nov 6 01:59:51.771: INFO: Creating a PV followed by a PVC Nov 6 01:59:51.777: INFO: Creating a PV followed by a PVC Nov 6 02:00:01.826: INFO: PVCs were not bound within 10s (that's good) [It] should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 STEP: Creating a StatefulSet with pod affinity on nodes Nov 6 02:00:01.834: INFO: Found 0 stateful pods, waiting for 3 Nov 6 02:00:11.838: INFO: Found 2 stateful pods, waiting for 3 Nov 6 02:00:21.838: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Nov 6 02:00:21.838: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Nov 6 02:00:21.838: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 6 02:00:31.837: INFO: Waiting for pod local-volume-statefulset-0 to enter Running - Ready=true, currently Running - Ready=true Nov 6 02:00:31.837: INFO: Waiting for pod local-volume-statefulset-1 to enter Running - Ready=true, currently Running - Ready=true Nov 6 02:00:31.838: INFO: Waiting for pod local-volume-statefulset-2 to enter Running - Ready=true, currently Running - Ready=true Nov 6 02:00:31.840: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-0] to have phase Bound Nov 6 02:00:31.842: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-0 found and phase=Bound (1.792471ms) Nov 6 02:00:31.842: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-0] to have phase Bound Nov 6 02:00:31.843: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-0 found and phase=Bound (1.412728ms) Nov 6 02:00:31.843: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-1] to have phase Bound Nov 6 02:00:31.845: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-1 found and phase=Bound (1.891099ms) Nov 6 02:00:31.845: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-1] to have phase Bound Nov 6 02:00:31.847: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-1 found and phase=Bound (2.04139ms) Nov 6 02:00:31.847: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol1-local-volume-statefulset-2] to have phase Bound Nov 6 02:00:31.849: INFO: PersistentVolumeClaim vol1-local-volume-statefulset-2 found and phase=Bound (1.620035ms) Nov 6 02:00:31.849: INFO: Waiting up to timeout=1s for PersistentVolumeClaims [vol2-local-volume-statefulset-2] to have phase Bound Nov 6 02:00:31.851: INFO: PersistentVolumeClaim vol2-local-volume-statefulset-2 found and phase=Bound (2.242456ms) [AfterEach] StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:403 STEP: Cleaning up PVC and PV Nov 6 02:00:31.851: INFO: Deleting PersistentVolumeClaim "pvc-9vl4x" Nov 6 02:00:31.856: INFO: Deleting PersistentVolume "local-pvt82qm" STEP: Cleaning up PVC and PV Nov 6 02:00:31.860: INFO: Deleting PersistentVolumeClaim "pvc-cfvxr" Nov 6 02:00:31.863: INFO: Deleting PersistentVolume 
"local-pvcqcjn" STEP: Cleaning up PVC and PV Nov 6 02:00:31.867: INFO: Deleting PersistentVolumeClaim "pvc-mn6nw" Nov 6 02:00:31.870: INFO: Deleting PersistentVolume "local-pvlshhn" STEP: Cleaning up PVC and PV Nov 6 02:00:31.874: INFO: Deleting PersistentVolumeClaim "pvc-47jgl" Nov 6 02:00:31.878: INFO: Deleting PersistentVolume "local-pvxwrlw" STEP: Cleaning up PVC and PV Nov 6 02:00:31.882: INFO: Deleting PersistentVolumeClaim "pvc-h6mqh" Nov 6 02:00:31.886: INFO: Deleting PersistentVolume "local-pv7rgqf" STEP: Cleaning up PVC and PV Nov 6 02:00:31.890: INFO: Deleting PersistentVolumeClaim "pvc-xcl79" Nov 6 02:00:31.893: INFO: Deleting PersistentVolume "local-pvcnj8r" STEP: Removing the test directory Nov 6 02:00:31.897: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e69266ff-80b5-4eb3-9f9e-ebd33096c894] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:31.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:32.542: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f8ac6fc9-2f56-4dfa-b329-47405c1b5533] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:32.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:32.624: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-180b850e-29d2-4a2e-a87e-612e152c90d2] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:32.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:32.872: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ae6149dc-a226-48d0-8075-c3a862e52d3f] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:32.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:32.977: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-155dd6c2-7baf-4d06-ae60-b457e2852bb9] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:32.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:33.061: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5b960bc2-eec6-49d7-afed-3fff70c7f6d2] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node1-r2b9r ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:33.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up PVC and PV Nov 6 02:00:33.187: INFO: Deleting PersistentVolumeClaim "pvc-ks68w" Nov 6 02:00:33.192: INFO: Deleting PersistentVolume "local-pvpqzqm" STEP: Cleaning 
up PVC and PV Nov 6 02:00:33.196: INFO: Deleting PersistentVolumeClaim "pvc-2zxsr" Nov 6 02:00:33.200: INFO: Deleting PersistentVolume "local-pvl4rfv" STEP: Cleaning up PVC and PV Nov 6 02:00:33.204: INFO: Deleting PersistentVolumeClaim "pvc-9dg42" Nov 6 02:00:33.207: INFO: Deleting PersistentVolume "local-pvv2bxh" STEP: Cleaning up PVC and PV Nov 6 02:00:33.211: INFO: Deleting PersistentVolumeClaim "pvc-v2r9t" Nov 6 02:00:33.214: INFO: Deleting PersistentVolume "local-pvqpnb2" STEP: Cleaning up PVC and PV Nov 6 02:00:33.218: INFO: Deleting PersistentVolumeClaim "pvc-q78sn" Nov 6 02:00:33.222: INFO: Deleting PersistentVolume "local-pv46rh5" STEP: Cleaning up PVC and PV Nov 6 02:00:33.225: INFO: Deleting PersistentVolumeClaim "pvc-rqmtp" Nov 6 02:00:33.228: INFO: Deleting PersistentVolume "local-pvg768p" STEP: Removing the test directory Nov 6 02:00:33.232: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5b620107-cff8-42a3-8da3-c1ec600b8f65] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:33.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:33.320: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d08c14c3-d138-4ae3-9936-0c8405dc400a] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:33.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:33.398: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-23c6cb70-f2c2-43c6-8ebd-87180b7452d0] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:33.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:33.504: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5ba51b4e-10ac-4324-a165-8d8f4094b2e4] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:33.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:33.594: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2eb59091-c22e-40c2-82fb-259d02d89d28] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:33.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:00:33.710: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d771a4fb-3e28-4a9d-bea5-0f533f9d925b] Namespace:persistent-local-volumes-test-3676 PodName:hostexec-node2-lmsg4 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:00:33.710: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:00:33.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3676" for this suite. • [SLOW TEST:59.762 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 StatefulSet with pod affinity [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:384 should use volumes on one node when pod has affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:419 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:42.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-4596 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:59:42.286: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-attacher Nov 6 01:59:42.289: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4596 Nov 6 01:59:42.289: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4596 Nov 6 01:59:42.292: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4596 Nov 6 01:59:42.295: INFO: creating *v1.Role: csi-mock-volumes-4596-5569/external-attacher-cfg-csi-mock-volumes-4596 Nov 6 01:59:42.298: INFO: creating *v1.RoleBinding: csi-mock-volumes-4596-5569/csi-attacher-role-cfg Nov 6 01:59:42.303: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-provisioner Nov 6 01:59:42.306: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4596 Nov 6 01:59:42.306: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4596 Nov 6 01:59:42.309: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4596 Nov 6 01:59:42.312: INFO: creating *v1.Role: csi-mock-volumes-4596-5569/external-provisioner-cfg-csi-mock-volumes-4596 Nov 6 01:59:42.315: INFO: creating *v1.RoleBinding: csi-mock-volumes-4596-5569/csi-provisioner-role-cfg Nov 6 01:59:42.318: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-resizer Nov 6 01:59:42.322: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4596 Nov 6 01:59:42.322: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4596 Nov 6 01:59:42.324: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4596 Nov 6 01:59:42.327: INFO: creating *v1.Role: csi-mock-volumes-4596-5569/external-resizer-cfg-csi-mock-volumes-4596 Nov 6 01:59:42.329: INFO: creating *v1.RoleBinding: csi-mock-volumes-4596-5569/csi-resizer-role-cfg Nov 6 01:59:42.331: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-snapshotter Nov 6 01:59:42.334: INFO: creating 
*v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4596 Nov 6 01:59:42.334: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4596 Nov 6 01:59:42.337: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4596 Nov 6 01:59:42.340: INFO: creating *v1.Role: csi-mock-volumes-4596-5569/external-snapshotter-leaderelection-csi-mock-volumes-4596 Nov 6 01:59:42.343: INFO: creating *v1.RoleBinding: csi-mock-volumes-4596-5569/external-snapshotter-leaderelection Nov 6 01:59:42.346: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-mock Nov 6 01:59:42.348: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4596 Nov 6 01:59:42.353: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4596 Nov 6 01:59:42.355: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4596 Nov 6 01:59:42.358: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4596 Nov 6 01:59:42.360: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4596 Nov 6 01:59:42.362: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4596 Nov 6 01:59:42.365: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4596 Nov 6 01:59:42.368: INFO: creating *v1.StatefulSet: csi-mock-volumes-4596-5569/csi-mockplugin Nov 6 01:59:42.373: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4596 Nov 6 01:59:42.376: INFO: creating *v1.StatefulSet: csi-mock-volumes-4596-5569/csi-mockplugin-attacher Nov 6 01:59:42.379: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4596" Nov 6 01:59:42.382: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4596 to register on node node2 STEP: Creating pod STEP: checking for CSIInlineVolumes feature Nov 6 01:59:55.935: INFO: Pod inline-volume-lxv9v has the following logs: Nov 6 01:59:55.938: INFO: Deleting pod "inline-volume-lxv9v" in namespace "csi-mock-volumes-4596" Nov 6 01:59:55.942: INFO: Wait up to 5m0s for pod "inline-volume-lxv9v" to be fully deleted STEP: Deleting the previously created pod Nov 6 01:59:57.946: INFO: Deleting pod "pvc-volume-tester-nvkq8" in namespace "csi-mock-volumes-4596" Nov 6 01:59:57.954: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nvkq8" to be fully deleted STEP: Checking CSI driver logs Nov 6 02:00:09.973: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-nvkq8 Nov 6 02:00:09.973: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-4596 Nov 6 02:00:09.973: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 309086aa-708c-4df3-9659-c24527ea319b Nov 6 02:00:09.973: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Nov 6 02:00:09.973: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true Nov 6 02:00:09.973: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-4549d785dc7d8596b6fdafa5fda3143178a64589aeeec687cb7f61c52dcaf1aa","target_path":"/var/lib/kubelet/pods/309086aa-708c-4df3-9659-c24527ea319b/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-nvkq8 Nov 6 02:00:09.973: INFO: Deleting pod "pvc-volume-tester-nvkq8" in namespace "csi-mock-volumes-4596" 
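The csi.storage.k8s.io/ephemeral: true attribute in the driver log is the direct consequence of the pod declaring its CSI volume inline rather than through a PVC: for inline volumes kubelet marks the volume context as ephemeral (the pod.* attributes additionally require pod info on mount, shown further below). A sketch of such a pod, assuming the standard core/v1 types; the image and mount path are illustrative, the driver name is taken from this run:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// inlineCSIPod mounts a CSI volume declared directly in the pod spec.
// No PVC or PV object is involved; the volume's lifetime is the pod's.
func inlineCSIPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pvc-volume-tester"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "volume-tester",
                Image:   "busybox",
                Command: []string{"sleep", "3600"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "my-volume",
                    MountPath: "/mnt/test",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "my-volume",
                VolumeSource: corev1.VolumeSource{
                    CSI: &corev1.CSIVolumeSource{
                        Driver: "csi-mock-csi-mock-volumes-4596", // driver name from this run
                    },
                },
            }},
        },
    }
}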
STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4596 STEP: Waiting for namespaces [csi-mock-volumes-4596] to vanish STEP: uninstalling csi mock driver Nov 6 02:00:15.988: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-attacher Nov 6 02:00:15.991: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4596 Nov 6 02:00:15.995: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4596 Nov 6 02:00:15.998: INFO: deleting *v1.Role: csi-mock-volumes-4596-5569/external-attacher-cfg-csi-mock-volumes-4596 Nov 6 02:00:16.002: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4596-5569/csi-attacher-role-cfg Nov 6 02:00:16.005: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-provisioner Nov 6 02:00:16.009: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4596 Nov 6 02:00:16.012: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4596 Nov 6 02:00:16.015: INFO: deleting *v1.Role: csi-mock-volumes-4596-5569/external-provisioner-cfg-csi-mock-volumes-4596 Nov 6 02:00:16.018: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4596-5569/csi-provisioner-role-cfg Nov 6 02:00:16.021: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-resizer Nov 6 02:00:16.024: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4596 Nov 6 02:00:16.027: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4596 Nov 6 02:00:16.031: INFO: deleting *v1.Role: csi-mock-volumes-4596-5569/external-resizer-cfg-csi-mock-volumes-4596 Nov 6 02:00:16.034: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4596-5569/csi-resizer-role-cfg Nov 6 02:00:16.037: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-snapshotter Nov 6 02:00:16.040: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4596 Nov 6 02:00:16.043: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4596 Nov 6 02:00:16.046: INFO: deleting *v1.Role: csi-mock-volumes-4596-5569/external-snapshotter-leaderelection-csi-mock-volumes-4596 Nov 6 02:00:16.050: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4596-5569/external-snapshotter-leaderelection Nov 6 02:00:16.053: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4596-5569/csi-mock Nov 6 02:00:16.057: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4596 Nov 6 02:00:16.061: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4596 Nov 6 02:00:16.064: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4596 Nov 6 02:00:16.067: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4596 Nov 6 02:00:16.070: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4596 Nov 6 02:00:16.073: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4596 Nov 6 02:00:16.077: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4596 Nov 6 02:00:16.080: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4596-5569/csi-mockplugin Nov 6 02:00:16.084: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4596 Nov 6 02:00:16.088: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4596-5569/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-4596-5569 STEP: Waiting for namespaces [csi-mock-volumes-4596-5569] to vanish 
[AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:00:44.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:61.878 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 contain ephemeral=true when using inline volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":18,"skipped":606,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:00:44.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:77 Nov 6 02:00:44.246: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:00:44.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2389" for this suite. 
[AfterEach] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:111 Nov 6 02:00:44.255: INFO: AfterEach: Cleaning up test resources Nov 6 02:00:44.255: INFO: pvc is nil Nov 6 02:00:44.255: INFO: pv is nil S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] PersistentVolumes GCEPD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85 ------------------------------ SS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:00:01.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-5558 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 02:00:02.045: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-attacher Nov 6 02:00:02.048: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5558 Nov 6 02:00:02.048: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5558 Nov 6 02:00:02.050: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5558 Nov 6 02:00:02.053: INFO: creating *v1.Role: csi-mock-volumes-5558-2848/external-attacher-cfg-csi-mock-volumes-5558 Nov 6 02:00:02.056: INFO: creating *v1.RoleBinding: csi-mock-volumes-5558-2848/csi-attacher-role-cfg Nov 6 02:00:02.060: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-provisioner Nov 6 02:00:02.063: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5558 Nov 6 02:00:02.063: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5558 Nov 6 02:00:02.069: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5558 Nov 6 02:00:02.075: INFO: creating *v1.Role: csi-mock-volumes-5558-2848/external-provisioner-cfg-csi-mock-volumes-5558 Nov 6 02:00:02.081: INFO: creating *v1.RoleBinding: csi-mock-volumes-5558-2848/csi-provisioner-role-cfg Nov 6 02:00:02.089: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-resizer Nov 6 02:00:02.092: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5558 Nov 6 02:00:02.092: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5558 Nov 6 02:00:02.095: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5558 Nov 6 02:00:02.098: INFO: creating *v1.Role: csi-mock-volumes-5558-2848/external-resizer-cfg-csi-mock-volumes-5558 Nov 6 02:00:02.101: INFO: creating *v1.RoleBinding: csi-mock-volumes-5558-2848/csi-resizer-role-cfg Nov 
6 02:00:02.103: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-snapshotter Nov 6 02:00:02.106: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5558 Nov 6 02:00:02.106: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5558 Nov 6 02:00:02.108: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5558 Nov 6 02:00:02.111: INFO: creating *v1.Role: csi-mock-volumes-5558-2848/external-snapshotter-leaderelection-csi-mock-volumes-5558 Nov 6 02:00:02.113: INFO: creating *v1.RoleBinding: csi-mock-volumes-5558-2848/external-snapshotter-leaderelection Nov 6 02:00:02.115: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-mock Nov 6 02:00:02.117: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5558 Nov 6 02:00:02.120: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5558 Nov 6 02:00:02.123: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5558 Nov 6 02:00:02.125: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5558 Nov 6 02:00:02.128: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5558 Nov 6 02:00:02.131: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5558 Nov 6 02:00:02.133: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5558 Nov 6 02:00:02.136: INFO: creating *v1.StatefulSet: csi-mock-volumes-5558-2848/csi-mockplugin Nov 6 02:00:02.140: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5558 Nov 6 02:00:02.144: INFO: creating *v1.StatefulSet: csi-mock-volumes-5558-2848/csi-mockplugin-attacher Nov 6 02:00:02.147: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5558" Nov 6 02:00:02.148: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5558 to register on node node2 STEP: Creating pod Nov 6 02:00:11.667: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 02:00:11.671: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-28g2l] to have phase Bound Nov 6 02:00:11.673: INFO: PersistentVolumeClaim pvc-28g2l found but phase is Pending instead of Bound. 
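Whether the csi.storage.k8s.io/pod.* attributes this spec checks for actually reach the driver is governed by its CSIDriver registration object: with podInfoOnMount enabled, kubelet adds the pod name, namespace, UID and service account to the volume context of every NodePublishVolume call. A sketch of that object, assuming the storage/v1 API; the attachRequired value is illustrative and the driver name mirrors this run:

package sketch

import (
    storagev1 "k8s.io/api/storage/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podInfoCSIDriver registers a CSI driver that asks kubelet to pass pod
// information in the volume context of each NodePublishVolume call.
func podInfoCSIDriver() *storagev1.CSIDriver {
    podInfo := true
    attachRequired := true // illustrative; the mock driver's actual setting may differ
    return &storagev1.CSIDriver{
        ObjectMeta: metav1.ObjectMeta{Name: "csi-mock-csi-mock-volumes-5558"},
        Spec: storagev1.CSIDriverSpec{
            AttachRequired: &attachRequired,
            PodInfoOnMount: &podInfo,
        },
    }
}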
Nov 6 02:00:13.676: INFO: PersistentVolumeClaim pvc-28g2l found and phase=Bound (2.005463801s) STEP: checking for CSIInlineVolumes feature Nov 6 02:00:25.716: INFO: Pod inline-volume-vk82f has the following logs: Nov 6 02:00:25.723: INFO: Deleting pod "inline-volume-vk82f" in namespace "csi-mock-volumes-5558" Nov 6 02:00:25.726: INFO: Wait up to 5m0s for pod "inline-volume-vk82f" to be fully deleted STEP: Deleting the previously created pod Nov 6 02:00:27.736: INFO: Deleting pod "pvc-volume-tester-fk4x2" in namespace "csi-mock-volumes-5558" Nov 6 02:00:27.740: INFO: Wait up to 5m0s for pod "pvc-volume-tester-fk4x2" to be fully deleted STEP: Checking CSI driver logs Nov 6 02:00:39.826: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false Nov 6 02:00:39.826: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-fk4x2 Nov 6 02:00:39.826: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-5558 Nov 6 02:00:39.826: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 49850b0a-8cc0-42c0-8c2d-adea79295a3f Nov 6 02:00:39.826: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default Nov 6 02:00:39.826: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/49850b0a-8cc0-42c0-8c2d-adea79295a3f/volumes/kubernetes.io~csi/pvc-7bb9bebd-9ef9-4670-8149-6761d48cdd7c/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-fk4x2 Nov 6 02:00:39.826: INFO: Deleting pod "pvc-volume-tester-fk4x2" in namespace "csi-mock-volumes-5558" STEP: Deleting claim pvc-28g2l Nov 6 02:00:39.834: INFO: Waiting up to 2m0s for PersistentVolume pvc-7bb9bebd-9ef9-4670-8149-6761d48cdd7c to get deleted Nov 6 02:00:39.836: INFO: PersistentVolume pvc-7bb9bebd-9ef9-4670-8149-6761d48cdd7c found and phase=Bound (2.332577ms) Nov 6 02:00:41.841: INFO: PersistentVolume pvc-7bb9bebd-9ef9-4670-8149-6761d48cdd7c was removed STEP: Deleting storageclass csi-mock-volumes-5558-sctpzng STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5558 STEP: Waiting for namespaces [csi-mock-volumes-5558] to vanish STEP: uninstalling csi mock driver Nov 6 02:00:47.852: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-attacher Nov 6 02:00:47.857: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5558 Nov 6 02:00:47.861: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5558 Nov 6 02:00:47.865: INFO: deleting *v1.Role: csi-mock-volumes-5558-2848/external-attacher-cfg-csi-mock-volumes-5558 Nov 6 02:00:47.869: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5558-2848/csi-attacher-role-cfg Nov 6 02:00:47.872: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-provisioner Nov 6 02:00:47.876: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5558 Nov 6 02:00:47.879: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5558 Nov 6 02:00:47.882: INFO: deleting *v1.Role: csi-mock-volumes-5558-2848/external-provisioner-cfg-csi-mock-volumes-5558 Nov 6 02:00:47.885: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5558-2848/csi-provisioner-role-cfg Nov 6 02:00:47.888: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-resizer Nov 6 02:00:47.892: INFO: deleting *v1.ClusterRole: 
external-resizer-runner-csi-mock-volumes-5558 Nov 6 02:00:47.895: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5558 Nov 6 02:00:47.898: INFO: deleting *v1.Role: csi-mock-volumes-5558-2848/external-resizer-cfg-csi-mock-volumes-5558 Nov 6 02:00:47.901: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5558-2848/csi-resizer-role-cfg Nov 6 02:00:47.904: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-snapshotter Nov 6 02:00:47.908: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5558 Nov 6 02:00:47.912: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5558 Nov 6 02:00:47.915: INFO: deleting *v1.Role: csi-mock-volumes-5558-2848/external-snapshotter-leaderelection-csi-mock-volumes-5558 Nov 6 02:00:47.919: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5558-2848/external-snapshotter-leaderelection Nov 6 02:00:47.922: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5558-2848/csi-mock Nov 6 02:00:47.925: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5558 Nov 6 02:00:47.929: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5558 Nov 6 02:00:47.933: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5558 Nov 6 02:00:47.939: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5558 Nov 6 02:00:47.943: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5558 Nov 6 02:00:47.946: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5558 Nov 6 02:00:47.950: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5558 Nov 6 02:00:47.953: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5558-2848/csi-mockplugin Nov 6 02:00:47.957: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5558 Nov 6 02:00:47.960: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5558-2848/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5558-2848 STEP: Waiting for namespaces [csi-mock-volumes-5558-2848] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:00:53.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:51.995 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should be passed when podInfoOnMount=true /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":12,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:00:27.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 6 02:00:28.050: INFO: The status of Pod test-hostpath-type-cldvb is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:00:30.053: INFO: The status of Pod test-hostpath-type-cldvb is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:00:32.052: INFO: The status of Pod test-hostpath-type-cldvb is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:00:34.053: INFO: The status of Pod test-hostpath-type-cldvb is Running (Ready = true) STEP: running on node node1 [It] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:01:24.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-8084" for this suite. • [SLOW TEST:56.096 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:226 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev","total":-1,"completed":11,"skipped":315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:01:24.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-block-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325 STEP: Create a pod for further testing Nov 6 02:01:24.195: INFO: The status of Pod test-hostpath-type-s5bwx is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:01:26.198: INFO: The status of Pod test-hostpath-type-s5bwx is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:01:28.199: INFO: The status of Pod test-hostpath-type-s5bwx is Running (Ready = true) STEP: running on node node2 STEP: Create a block device for further testing Nov 6 02:01:28.201: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-805 PodName:test-hostpath-type-s5bwx ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:01:28.201: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:01:30.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-block-dev-805" for this suite. • [SLOW TEST:6.172 seconds] [sig-storage] HostPathType Block Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:359 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile","total":-1,"completed":12,"skipped":341,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:01:30.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] volume on default medium should have the correct mode using FSGroup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71 STEP: Creating a pod to test emptydir volume type on node default medium Nov 6 02:01:30.370: INFO: Waiting up to 5m0s for pod "pod-d80ec336-1a6d-444b-9722-00cd2543f245" in namespace "emptydir-7959" to be "Succeeded or Failed" Nov 6 02:01:30.375: INFO: Pod "pod-d80ec336-1a6d-444b-9722-00cd2543f245": Phase="Pending", Reason="", readiness=false. Elapsed: 5.278082ms Nov 6 02:01:32.377: INFO: Pod "pod-d80ec336-1a6d-444b-9722-00cd2543f245": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00762875s Nov 6 02:01:34.382: INFO: Pod "pod-d80ec336-1a6d-444b-9722-00cd2543f245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012048384s STEP: Saw pod success Nov 6 02:01:34.382: INFO: Pod "pod-d80ec336-1a6d-444b-9722-00cd2543f245" satisfied condition "Succeeded or Failed" Nov 6 02:01:34.384: INFO: Trying to get logs from node node2 pod pod-d80ec336-1a6d-444b-9722-00cd2543f245 container test-container: STEP: delete the pod Nov 6 02:01:34.416: INFO: Waiting for pod pod-d80ec336-1a6d-444b-9722-00cd2543f245 to disappear Nov 6 02:01:34.418: INFO: Pod pod-d80ec336-1a6d-444b-9722-00cd2543f245 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:01:34.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7959" for this suite. 
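The FSGroup variant of the emptyDir test reduces to a pod-level security context: when fsGroup is set, kubelet applies group ownership to the emptyDir before the container starts, and the test container only has to inspect the mount point and compare the mode. A sketch of the kind of pod being exercised, assuming the standard core/v1 types; the group id, image and command are illustrative:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirFSGroupPod creates an emptyDir on the default medium (node disk)
// and asks kubelet to apply fsGroup ownership to it; the container then
// reports the resulting mode of the mount point.
func emptyDirFSGroupPod() *corev1.Pod {
    fsGroup := int64(1001) // illustrative group id
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                FSGroup: &fsGroup,
            },
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls -ld /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{}, // default medium
                },
            }},
        },
    }
}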
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":13,"skipped":346,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:53.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 STEP: Building a driver namespace object, basename csi-mock-volumes-9658 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 01:59:53.722: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-attacher Nov 6 01:59:53.724: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9658 Nov 6 01:59:53.724: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-9658 Nov 6 01:59:53.727: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9658 Nov 6 01:59:53.729: INFO: creating *v1.Role: csi-mock-volumes-9658-7710/external-attacher-cfg-csi-mock-volumes-9658 Nov 6 01:59:53.732: INFO: creating *v1.RoleBinding: csi-mock-volumes-9658-7710/csi-attacher-role-cfg Nov 6 01:59:53.735: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-provisioner Nov 6 01:59:53.738: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9658 Nov 6 01:59:53.738: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-9658 Nov 6 01:59:53.740: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9658 Nov 6 01:59:53.743: INFO: creating *v1.Role: csi-mock-volumes-9658-7710/external-provisioner-cfg-csi-mock-volumes-9658 Nov 6 01:59:53.745: INFO: creating *v1.RoleBinding: csi-mock-volumes-9658-7710/csi-provisioner-role-cfg Nov 6 01:59:53.748: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-resizer Nov 6 01:59:53.750: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9658 Nov 6 01:59:53.750: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-9658 Nov 6 01:59:53.753: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9658 Nov 6 01:59:53.756: INFO: creating *v1.Role: csi-mock-volumes-9658-7710/external-resizer-cfg-csi-mock-volumes-9658 Nov 6 01:59:53.760: INFO: creating *v1.RoleBinding: csi-mock-volumes-9658-7710/csi-resizer-role-cfg Nov 6 01:59:53.763: INFO: creating *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-snapshotter Nov 6 01:59:53.765: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9658 Nov 6 01:59:53.765: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-9658 Nov 6 01:59:53.768: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9658 Nov 6 01:59:53.771: INFO: creating *v1.Role: csi-mock-volumes-9658-7710/external-snapshotter-leaderelection-csi-mock-volumes-9658 Nov 6 01:59:53.774: INFO: creating *v1.RoleBinding: csi-mock-volumes-9658-7710/external-snapshotter-leaderelection Nov 6 01:59:53.777: 
INFO: creating *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-mock Nov 6 01:59:53.779: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9658 Nov 6 01:59:53.782: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9658 Nov 6 01:59:53.785: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9658 Nov 6 01:59:53.788: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9658 Nov 6 01:59:53.790: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9658 Nov 6 01:59:53.793: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9658 Nov 6 01:59:53.795: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9658 Nov 6 01:59:53.799: INFO: creating *v1.StatefulSet: csi-mock-volumes-9658-7710/csi-mockplugin Nov 6 01:59:53.803: INFO: creating *v1.StatefulSet: csi-mock-volumes-9658-7710/csi-mockplugin-attacher Nov 6 01:59:53.807: INFO: creating *v1.StatefulSet: csi-mock-volumes-9658-7710/csi-mockplugin-resizer Nov 6 01:59:53.811: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9658 to register on node node1 STEP: Creating pod Nov 6 02:00:10.082: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 02:00:10.087: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-l8h76] to have phase Bound Nov 6 02:00:10.089: INFO: PersistentVolumeClaim pvc-l8h76 found but phase is Pending instead of Bound. Nov 6 02:00:12.092: INFO: PersistentVolumeClaim pvc-l8h76 found and phase=Bound (2.004999875s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-q6z7z Nov 6 02:00:22.131: INFO: Deleting pod "pvc-volume-tester-q6z7z" in namespace "csi-mock-volumes-9658" Nov 6 02:00:22.136: INFO: Wait up to 5m0s for pod "pvc-volume-tester-q6z7z" to be fully deleted STEP: Deleting claim pvc-l8h76 Nov 6 02:00:26.149: INFO: Waiting up to 2m0s for PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 to get deleted Nov 6 02:00:26.152: INFO: PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 found and phase=Bound (2.54218ms) Nov 6 02:00:28.155: INFO: PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 found and phase=Released (2.005568535s) Nov 6 02:00:30.158: INFO: PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 found and phase=Released (4.009229665s) Nov 6 02:00:32.161: INFO: PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 found and phase=Released (6.011509592s) Nov 6 02:00:34.164: INFO: PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 found and phase=Released (8.014594461s) Nov 6 02:00:36.167: INFO: PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 found and phase=Released (10.018304253s) Nov 6 02:00:38.173: INFO: PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 found and phase=Released (12.024278759s) Nov 6 02:00:40.178: INFO: PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 found and phase=Released (14.029470072s) Nov 6 02:00:42.182: INFO: PersistentVolume pvc-4c837cae-4574-4f14-b270-3381d64148d4 was removed STEP: Deleting storageclass csi-mock-volumes-9658-scd5hlm STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-9658 STEP: Waiting for namespaces [csi-mock-volumes-9658] to vanish STEP: uninstalling csi mock driver Nov 6 02:00:48.196: 
INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-attacher Nov 6 02:00:48.199: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-9658 Nov 6 02:00:48.203: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-9658 Nov 6 02:00:48.206: INFO: deleting *v1.Role: csi-mock-volumes-9658-7710/external-attacher-cfg-csi-mock-volumes-9658 Nov 6 02:00:48.210: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9658-7710/csi-attacher-role-cfg Nov 6 02:00:48.213: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-provisioner Nov 6 02:00:48.217: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-9658 Nov 6 02:00:48.220: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-9658 Nov 6 02:00:48.223: INFO: deleting *v1.Role: csi-mock-volumes-9658-7710/external-provisioner-cfg-csi-mock-volumes-9658 Nov 6 02:00:48.227: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9658-7710/csi-provisioner-role-cfg Nov 6 02:00:48.230: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-resizer Nov 6 02:00:48.233: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-9658 Nov 6 02:00:48.237: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-9658 Nov 6 02:00:48.240: INFO: deleting *v1.Role: csi-mock-volumes-9658-7710/external-resizer-cfg-csi-mock-volumes-9658 Nov 6 02:00:48.244: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9658-7710/csi-resizer-role-cfg Nov 6 02:00:48.248: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-snapshotter Nov 6 02:00:48.251: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-9658 Nov 6 02:00:48.255: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-9658 Nov 6 02:00:48.258: INFO: deleting *v1.Role: csi-mock-volumes-9658-7710/external-snapshotter-leaderelection-csi-mock-volumes-9658 Nov 6 02:00:48.261: INFO: deleting *v1.RoleBinding: csi-mock-volumes-9658-7710/external-snapshotter-leaderelection Nov 6 02:00:48.266: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-9658-7710/csi-mock Nov 6 02:00:48.270: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-9658 Nov 6 02:00:48.273: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-9658 Nov 6 02:00:48.279: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-9658 Nov 6 02:00:48.282: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-9658 Nov 6 02:00:48.285: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-9658 Nov 6 02:00:48.290: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9658 Nov 6 02:00:48.293: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9658 Nov 6 02:00:48.296: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9658-7710/csi-mockplugin Nov 6 02:00:48.300: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9658-7710/csi-mockplugin-attacher Nov 6 02:00:48.303: INFO: deleting *v1.StatefulSet: csi-mock-volumes-9658-7710/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-9658-7710 STEP: Waiting for namespaces [csi-mock-volumes-9658-7710] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:01:54.315: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:120.658 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI Volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561 should expand volume without restarting pod if nodeExpansion=off /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":14,"skipped":559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:01:54.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating block device on node "node2" using path "/tmp/local-volume-test-0c94936f-c403-401a-ab2b-003c971244c1" Nov 6 02:01:58.442: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-0c94936f-c403-401a-ab2b-003c971244c1 && dd if=/dev/zero of=/tmp/local-volume-test-0c94936f-c403-401a-ab2b-003c971244c1/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-0c94936f-c403-401a-ab2b-003c971244c1/file] Namespace:persistent-local-volumes-test-1307 PodName:hostexec-node2-vkqfg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:01:58.442: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:01:58.565: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0c94936f-c403-401a-ab2b-003c971244c1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1307 PodName:hostexec-node2-vkqfg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:01:58.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 02:01:58.705: INFO: Creating a PV followed by a PVC Nov 6 02:01:58.713: INFO: Waiting for PV local-pvwzpwf to bind to PVC pvc-rbk4j Nov 6 02:01:58.713: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-rbk4j] to have phase Bound Nov 6 02:01:58.715: INFO: PersistentVolumeClaim pvc-rbk4j found but phase is Pending instead of Bound. Nov 6 02:02:00.720: INFO: PersistentVolumeClaim pvc-rbk4j found but phase is Pending instead of Bound. Nov 6 02:02:02.725: INFO: PersistentVolumeClaim pvc-rbk4j found but phase is Pending instead of Bound. Nov 6 02:02:04.730: INFO: PersistentVolumeClaim pvc-rbk4j found but phase is Pending instead of Bound. 
Nov 6 02:02:06.733: INFO: PersistentVolumeClaim pvc-rbk4j found but phase is Pending instead of Bound. Nov 6 02:02:08.736: INFO: PersistentVolumeClaim pvc-rbk4j found and phase=Bound (10.02304887s) Nov 6 02:02:08.736: INFO: Waiting up to 3m0s for PersistentVolume local-pvwzpwf to have phase Bound Nov 6 02:02:08.738: INFO: PersistentVolume local-pvwzpwf found and phase=Bound (2.376644ms) [BeforeEach] Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261 Nov 6 02:02:08.742: INFO: We don't set fsGroup on block device, skipped. [AfterEach] [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 02:02:08.744: INFO: Deleting PersistentVolumeClaim "pvc-rbk4j" Nov 6 02:02:08.749: INFO: Deleting PersistentVolume "local-pvwzpwf" Nov 6 02:02:08.752: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-0c94936f-c403-401a-ab2b-003c971244c1/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-1307 PodName:hostexec-node2-vkqfg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:02:08.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Tear down block device "/dev/loop0" on node "node2" at path /tmp/local-volume-test-0c94936f-c403-401a-ab2b-003c971244c1/file Nov 6 02:02:08.844: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:persistent-local-volumes-test-1307 PodName:hostexec-node2-vkqfg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:02:08.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory /tmp/local-volume-test-0c94936f-c403-401a-ab2b-003c971244c1 Nov 6 02:02:08.932: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0c94936f-c403-401a-ab2b-003c971244c1] Namespace:persistent-local-volumes-test-1307 PodName:hostexec-node2-vkqfg ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:02:08.932: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:02:09.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1307" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [14.694 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: block] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 We don't set fsGroup on block device, skipped. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263 ------------------------------ SSSSSS ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod has affinity","total":-1,"completed":16,"skipped":602,"failed":0} [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:00:33.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:634 [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 STEP: Create a PVC STEP: Create 50 pods to use this PVC STEP: Wait for all pods are running [AfterEach] Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:648 STEP: Clean PV local-pvc56h4 [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:02:18.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3790" for this suite. 
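[editor's note] The repeated "PersistentVolumeClaim ... found but phase is Pending instead of Bound" lines throughout this log come from polling a claim until it reports phase Bound or a timeout expires. A minimal client-go sketch of that polling pattern is below, assuming a standard clientset; it is an illustration of the pattern, not the framework's WaitForPersistentVolumeClaimPhase implementation.

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls the named claim every two seconds until it is Bound,
// logging the intermediate phase much like the e2e output above.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // a real helper might tolerate transient API errors
		}
		if pvc.Status.Phase == corev1.ClaimBound {
			return true, nil
		}
		fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
		return false, nil
	})
}
```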
• [SLOW TEST:104.545 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:629 all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:657 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":-1,"completed":17,"skipped":602,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:01:34.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 Nov 6 02:02:20.519: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8288 PodName:hostexec-node1-chl7t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:02:20.519: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:02:21.374: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Nov 6 02:02:21.374: INFO: exec node1: stdout: "0\n" Nov 6 02:02:21.374: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Nov 6 02:02:21.374: INFO: exec node1: exit code: 0 Nov 6 02:02:21.374: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:02:21.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8288" for this suite. 
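[editor's note] The gce-localssd-scsi-fs spec above is skipped because the logged command `ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l` returned 0 on node1. A hedged sketch of that presence check follows; `runOnNode` is a hypothetical stand-in for the framework's host-exec helper (the ExecWithOptions entries in the log), and only the command string and the "at least 1" requirement are taken from the output above.

```go
package e2esketch

import (
	"fmt"
	"strconv"
	"strings"
)

// runOnNode is assumed to execute cmd on the given node (for example via a
// hostexec pod, as the ExecWithOptions lines in the log show) and return stdout.
type runOnNode func(node, cmd string) (string, error)

// scsiFSLocalSSDCount counts GCE scsi-fs local SSDs on a node; a caller would
// skip the spec when the count is below 1, mirroring the
// "Requires at least 1 scsi fs localSSD" message above.
func scsiFSLocalSSDCount(exec runOnNode, node string) (int, error) {
	const cmd = "ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l"
	out, err := exec(node, cmd)
	if err != nil {
		return 0, err
	}
	n, err := strconv.Atoi(strings.TrimSpace(out)) // e.g. "0\n" -> 0
	if err != nil {
		return 0, fmt.Errorf("unexpected output %q: %w", out, err)
	}
	return n, nil
}
```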
S [SKIPPING] in Spec Setup (BeforeEach) [46.920 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:286 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1256 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:02:21.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 02:02:23.520: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e-backend && mount --bind /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e-backend /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e-backend && ln -s /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e-backend /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e] Namespace:persistent-local-volumes-test-6089 PodName:hostexec-node2-blsf2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:02:23.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 02:02:23.647: INFO: Creating a PV followed by a PVC Nov 6 02:02:23.653: INFO: Waiting for PV local-pvczhvj to bind to PVC pvc-44ww4 Nov 6 02:02:23.654: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-44ww4] to have phase Bound Nov 6 02:02:23.655: INFO: PersistentVolumeClaim pvc-44ww4 found but phase is Pending instead of Bound. Nov 6 02:02:25.661: INFO: PersistentVolumeClaim pvc-44ww4 found but phase is Pending instead of Bound. Nov 6 02:02:27.663: INFO: PersistentVolumeClaim pvc-44ww4 found but phase is Pending instead of Bound. Nov 6 02:02:29.669: INFO: PersistentVolumeClaim pvc-44ww4 found but phase is Pending instead of Bound. Nov 6 02:02:31.672: INFO: PersistentVolumeClaim pvc-44ww4 found but phase is Pending instead of Bound. Nov 6 02:02:33.676: INFO: PersistentVolumeClaim pvc-44ww4 found but phase is Pending instead of Bound. Nov 6 02:02:35.679: INFO: PersistentVolumeClaim pvc-44ww4 found but phase is Pending instead of Bound. 
Nov 6 02:02:37.682: INFO: PersistentVolumeClaim pvc-44ww4 found and phase=Bound (14.028497008s) Nov 6 02:02:37.682: INFO: Waiting up to 3m0s for PersistentVolume local-pvczhvj to have phase Bound Nov 6 02:02:37.686: INFO: PersistentVolume local-pvczhvj found and phase=Bound (4.35885ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 02:02:41.717: INFO: pod "pod-2c2d7cc0-5e9b-4638-8d54-def750b8b491" created on Node "node2" STEP: Writing in pod1 Nov 6 02:02:41.717: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6089 PodName:pod-2c2d7cc0-5e9b-4638-8d54-def750b8b491 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:02:41.717: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:02:41.806: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 Nov 6 02:02:41.806: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6089 PodName:pod-2c2d7cc0-5e9b-4638-8d54-def750b8b491 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:02:41.806: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:02:41.881: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod1 Nov 6 02:02:41.882: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-6089 PodName:pod-2c2d7cc0-5e9b-4638-8d54-def750b8b491 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:02:41.882: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:02:41.959: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e > /mnt/volume1/test-file", out: "", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-2c2d7cc0-5e9b-4638-8d54-def750b8b491 in namespace persistent-local-volumes-test-6089 [AfterEach] [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 02:02:41.964: INFO: Deleting PersistentVolumeClaim "pvc-44ww4" Nov 6 02:02:41.968: INFO: Deleting PersistentVolume "local-pvczhvj" STEP: Removing the test directory Nov 6 02:02:41.973: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e && umount /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e-backend && rm -r /tmp/local-volume-test-ee3b49b6-c5dc-4b08-9481-fb632122679e-backend] Namespace:persistent-local-volumes-test-6089 PodName:hostexec-node2-blsf2 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:02:41.973: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:02:42.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6089" for this suite. • [SLOW TEST:20.633 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":14,"skipped":405,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:02:09.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 STEP: Building a driver namespace object, basename csi-mock-volumes-4558 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 02:02:09.160: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-attacher Nov 6 02:02:09.163: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4558 Nov 6 02:02:09.163: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4558 Nov 6 02:02:09.166: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4558 Nov 6 02:02:09.169: INFO: creating *v1.Role: csi-mock-volumes-4558-2029/external-attacher-cfg-csi-mock-volumes-4558 Nov 6 02:02:09.172: INFO: creating *v1.RoleBinding: csi-mock-volumes-4558-2029/csi-attacher-role-cfg Nov 6 02:02:09.175: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-provisioner Nov 6 02:02:09.177: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4558 Nov 6 02:02:09.177: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4558 Nov 6 02:02:09.180: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4558 Nov 6 02:02:09.183: INFO: creating *v1.Role: csi-mock-volumes-4558-2029/external-provisioner-cfg-csi-mock-volumes-4558 Nov 6 02:02:09.185: INFO: creating *v1.RoleBinding: csi-mock-volumes-4558-2029/csi-provisioner-role-cfg Nov 6 02:02:09.187: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-resizer Nov 6 02:02:09.190: INFO: creating *v1.ClusterRole: 
external-resizer-runner-csi-mock-volumes-4558 Nov 6 02:02:09.190: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4558 Nov 6 02:02:09.192: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4558 Nov 6 02:02:09.195: INFO: creating *v1.Role: csi-mock-volumes-4558-2029/external-resizer-cfg-csi-mock-volumes-4558 Nov 6 02:02:09.198: INFO: creating *v1.RoleBinding: csi-mock-volumes-4558-2029/csi-resizer-role-cfg Nov 6 02:02:09.201: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-snapshotter Nov 6 02:02:09.203: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4558 Nov 6 02:02:09.203: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4558 Nov 6 02:02:09.205: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4558 Nov 6 02:02:09.208: INFO: creating *v1.Role: csi-mock-volumes-4558-2029/external-snapshotter-leaderelection-csi-mock-volumes-4558 Nov 6 02:02:09.211: INFO: creating *v1.RoleBinding: csi-mock-volumes-4558-2029/external-snapshotter-leaderelection Nov 6 02:02:09.214: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-mock Nov 6 02:02:09.217: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4558 Nov 6 02:02:09.219: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4558 Nov 6 02:02:09.222: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4558 Nov 6 02:02:09.225: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4558 Nov 6 02:02:09.227: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4558 Nov 6 02:02:09.230: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4558 Nov 6 02:02:09.233: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4558 Nov 6 02:02:09.235: INFO: creating *v1.StatefulSet: csi-mock-volumes-4558-2029/csi-mockplugin Nov 6 02:02:09.239: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4558 Nov 6 02:02:09.242: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4558" Nov 6 02:02:09.244: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4558 to register on node node2 STEP: Creating pod Nov 6 02:02:14.259: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 02:02:14.265: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9cm48] to have phase Bound Nov 6 02:02:14.267: INFO: PersistentVolumeClaim pvc-9cm48 found but phase is Pending instead of Bound. 
Nov 6 02:02:16.273: INFO: PersistentVolumeClaim pvc-9cm48 found and phase=Bound (2.008051956s) Nov 6 02:02:20.297: INFO: Deleting pod "pvc-volume-tester-gxd9j" in namespace "csi-mock-volumes-4558" Nov 6 02:02:20.302: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gxd9j" to be fully deleted STEP: Checking PVC events Nov 6 02:02:31.333: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9cm48", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4558", SelfLink:"", UID:"2e23f4d4-6f37-4d85-84bf-62ba7d406bba", ResourceVersion:"112385", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760934, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00434f3e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00434f3f8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000ce9e80), VolumeMode:(*v1.PersistentVolumeMode)(0xc000ce9f10), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 02:02:31.333: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9cm48", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4558", SelfLink:"", UID:"2e23f4d4-6f37-4d85-84bf-62ba7d406bba", ResourceVersion:"112386", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760934, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4558"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00434f470), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00434f488)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00434f4a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00434f4b8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0003060f0), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc000306330), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 02:02:31.333: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9cm48", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4558", SelfLink:"", UID:"2e23f4d4-6f37-4d85-84bf-62ba7d406bba", ResourceVersion:"112394", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760934, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4558"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004502660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004502678)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004502690), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045026a8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-2e23f4d4-6f37-4d85-84bf-62ba7d406bba", StorageClassName:(*string)(0xc000d64d10), VolumeMode:(*v1.PersistentVolumeMode)(0xc000d64d90), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 02:02:31.334: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9cm48", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4558", SelfLink:"", UID:"2e23f4d4-6f37-4d85-84bf-62ba7d406bba", ResourceVersion:"112395", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760934, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4558"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00434f9e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00434f9f8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00434fa10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00434fa28)}}}, 
Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-2e23f4d4-6f37-4d85-84bf-62ba7d406bba", StorageClassName:(*string)(0xc004e38ad0), VolumeMode:(*v1.PersistentVolumeMode)(0xc004e38ae0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 02:02:31.334: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9cm48", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4558", SelfLink:"", UID:"2e23f4d4-6f37-4d85-84bf-62ba7d406bba", ResourceVersion:"112844", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760934, loc:(*time.Location)(0x9e12f00)}}, DeletionTimestamp:(*v1.Time)(0xc0045026d8), DeletionGracePeriodSeconds:(*int64)(0xc0046e8ad8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4558"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045026f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004502708)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004502720), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004502738)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-2e23f4d4-6f37-4d85-84bf-62ba7d406bba", StorageClassName:(*string)(0xc000d64f60), VolumeMode:(*v1.PersistentVolumeMode)(0xc000d64fa0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} Nov 6 02:02:31.334: INFO: PVC event DELETED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-9cm48", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4558", SelfLink:"", UID:"2e23f4d4-6f37-4d85-84bf-62ba7d406bba", ResourceVersion:"112845", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63771760934, loc:(*time.Location)(0x9e12f00)}}, 
DeletionTimestamp:(*v1.Time)(0xc004502768), DeletionGracePeriodSeconds:(*int64)(0xc0046e8b88), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4558"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004502780), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004502798)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0045027b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0045027c8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-2e23f4d4-6f37-4d85-84bf-62ba7d406bba", StorageClassName:(*string)(0xc000d651e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000d65200), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}} STEP: Deleting pod pvc-volume-tester-gxd9j Nov 6 02:02:31.334: INFO: Deleting pod "pvc-volume-tester-gxd9j" in namespace "csi-mock-volumes-4558" STEP: Deleting claim pvc-9cm48 STEP: Deleting storageclass csi-mock-volumes-4558-scn9xpl STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4558 STEP: Waiting for namespaces [csi-mock-volumes-4558] to vanish STEP: uninstalling csi mock driver Nov 6 02:02:37.350: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-attacher Nov 6 02:02:37.354: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4558 Nov 6 02:02:37.358: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4558 Nov 6 02:02:37.361: INFO: deleting *v1.Role: csi-mock-volumes-4558-2029/external-attacher-cfg-csi-mock-volumes-4558 Nov 6 02:02:37.365: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4558-2029/csi-attacher-role-cfg Nov 6 02:02:37.369: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-provisioner Nov 6 02:02:37.372: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4558 Nov 6 02:02:37.376: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4558 Nov 6 02:02:37.380: INFO: deleting *v1.Role: csi-mock-volumes-4558-2029/external-provisioner-cfg-csi-mock-volumes-4558 Nov 6 02:02:37.386: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4558-2029/csi-provisioner-role-cfg Nov 6 02:02:37.392: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-resizer Nov 6 02:02:37.396: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4558 Nov 6 02:02:37.399: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4558 Nov 6 02:02:37.407: INFO: deleting *v1.Role: 
csi-mock-volumes-4558-2029/external-resizer-cfg-csi-mock-volumes-4558 Nov 6 02:02:37.410: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4558-2029/csi-resizer-role-cfg Nov 6 02:02:37.414: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-snapshotter Nov 6 02:02:37.417: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4558 Nov 6 02:02:37.420: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4558 Nov 6 02:02:37.423: INFO: deleting *v1.Role: csi-mock-volumes-4558-2029/external-snapshotter-leaderelection-csi-mock-volumes-4558 Nov 6 02:02:37.428: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4558-2029/external-snapshotter-leaderelection Nov 6 02:02:37.431: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4558-2029/csi-mock Nov 6 02:02:37.434: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4558 Nov 6 02:02:37.438: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4558 Nov 6 02:02:37.441: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4558 Nov 6 02:02:37.444: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4558 Nov 6 02:02:37.447: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4558 Nov 6 02:02:37.451: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4558 Nov 6 02:02:37.454: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4558 Nov 6 02:02:37.458: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4558-2029/csi-mockplugin Nov 6 02:02:37.462: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4558 STEP: deleting the driver namespace: csi-mock-volumes-4558-2029 STEP: Waiting for namespaces [csi-mock-volumes-4558-2029] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:05.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:56.382 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 storage capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900 unlimited /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":15,"skipped":593,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:02:42.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity used, no capacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-7920 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 02:02:42.171: INFO: creating 
*v1.ServiceAccount: csi-mock-volumes-7920-451/csi-attacher Nov 6 02:02:42.176: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7920 Nov 6 02:02:42.176: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-7920 Nov 6 02:02:42.179: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7920 Nov 6 02:02:42.186: INFO: creating *v1.Role: csi-mock-volumes-7920-451/external-attacher-cfg-csi-mock-volumes-7920 Nov 6 02:02:42.189: INFO: creating *v1.RoleBinding: csi-mock-volumes-7920-451/csi-attacher-role-cfg Nov 6 02:02:42.191: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7920-451/csi-provisioner Nov 6 02:02:42.194: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7920 Nov 6 02:02:42.194: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-7920 Nov 6 02:02:42.197: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7920 Nov 6 02:02:42.199: INFO: creating *v1.Role: csi-mock-volumes-7920-451/external-provisioner-cfg-csi-mock-volumes-7920 Nov 6 02:02:42.202: INFO: creating *v1.RoleBinding: csi-mock-volumes-7920-451/csi-provisioner-role-cfg Nov 6 02:02:42.205: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7920-451/csi-resizer Nov 6 02:02:42.208: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7920 Nov 6 02:02:42.208: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-7920 Nov 6 02:02:42.211: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7920 Nov 6 02:02:42.213: INFO: creating *v1.Role: csi-mock-volumes-7920-451/external-resizer-cfg-csi-mock-volumes-7920 Nov 6 02:02:42.216: INFO: creating *v1.RoleBinding: csi-mock-volumes-7920-451/csi-resizer-role-cfg Nov 6 02:02:42.218: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7920-451/csi-snapshotter Nov 6 02:02:42.221: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7920 Nov 6 02:02:42.221: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-7920 Nov 6 02:02:42.223: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7920 Nov 6 02:02:42.226: INFO: creating *v1.Role: csi-mock-volumes-7920-451/external-snapshotter-leaderelection-csi-mock-volumes-7920 Nov 6 02:02:42.228: INFO: creating *v1.RoleBinding: csi-mock-volumes-7920-451/external-snapshotter-leaderelection Nov 6 02:02:42.230: INFO: creating *v1.ServiceAccount: csi-mock-volumes-7920-451/csi-mock Nov 6 02:02:42.233: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7920 Nov 6 02:02:42.236: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7920 Nov 6 02:02:42.239: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7920 Nov 6 02:02:42.242: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7920 Nov 6 02:02:42.244: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7920 Nov 6 02:02:42.247: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7920 Nov 6 02:02:42.249: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7920 Nov 6 02:02:42.252: INFO: creating *v1.StatefulSet: csi-mock-volumes-7920-451/csi-mockplugin Nov 6 02:02:42.256: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7920 Nov 6 02:02:42.259: INFO: creating *v1.StatefulSet: 
csi-mock-volumes-7920-451/csi-mockplugin-attacher Nov 6 02:02:42.263: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7920" Nov 6 02:02:42.264: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7920 to register on node node2 STEP: Creating pod Nov 6 02:02:56.784: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 6 02:02:56.812: INFO: Deleting pod "pvc-volume-tester-29n8l" in namespace "csi-mock-volumes-7920" Nov 6 02:02:56.817: INFO: Wait up to 5m0s for pod "pvc-volume-tester-29n8l" to be fully deleted STEP: Deleting pod pvc-volume-tester-29n8l Nov 6 02:02:56.819: INFO: Deleting pod "pvc-volume-tester-29n8l" in namespace "csi-mock-volumes-7920" STEP: Deleting claim pvc-rwvmm STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-7920 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-7920 STEP: Waiting for namespaces [csi-mock-volumes-7920] to vanish STEP: uninstalling csi mock driver Nov 6 02:03:02.838: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7920-451/csi-attacher Nov 6 02:03:02.842: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-7920 Nov 6 02:03:02.846: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-7920 Nov 6 02:03:02.850: INFO: deleting *v1.Role: csi-mock-volumes-7920-451/external-attacher-cfg-csi-mock-volumes-7920 Nov 6 02:03:02.853: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7920-451/csi-attacher-role-cfg Nov 6 02:03:02.856: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7920-451/csi-provisioner Nov 6 02:03:02.864: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-7920 Nov 6 02:03:02.870: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-7920 Nov 6 02:03:02.877: INFO: deleting *v1.Role: csi-mock-volumes-7920-451/external-provisioner-cfg-csi-mock-volumes-7920 Nov 6 02:03:02.887: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7920-451/csi-provisioner-role-cfg Nov 6 02:03:02.891: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7920-451/csi-resizer Nov 6 02:03:02.894: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-7920 Nov 6 02:03:02.897: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-7920 Nov 6 02:03:02.900: INFO: deleting *v1.Role: csi-mock-volumes-7920-451/external-resizer-cfg-csi-mock-volumes-7920 Nov 6 02:03:02.904: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7920-451/csi-resizer-role-cfg Nov 6 02:03:02.907: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7920-451/csi-snapshotter Nov 6 02:03:02.910: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-7920 Nov 6 02:03:02.913: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-7920 Nov 6 02:03:02.917: INFO: deleting *v1.Role: csi-mock-volumes-7920-451/external-snapshotter-leaderelection-csi-mock-volumes-7920 Nov 6 02:03:02.922: INFO: deleting *v1.RoleBinding: csi-mock-volumes-7920-451/external-snapshotter-leaderelection Nov 6 02:03:02.937: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-7920-451/csi-mock Nov 6 02:03:02.941: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-7920 Nov 6 02:03:02.945: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-7920 Nov 6 02:03:02.947: INFO: deleting *v1.ClusterRoleBinding: 
csi-controller-cluster-driver-registrar-role-csi-mock-volumes-7920
Nov 6 02:03:02.950: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-7920
Nov 6 02:03:02.953: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-7920
Nov 6 02:03:02.956: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7920
Nov 6 02:03:02.959: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7920
Nov 6 02:03:02.963: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7920-451/csi-mockplugin
Nov 6 02:03:02.967: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-7920
Nov 6 02:03:02.971: INFO: deleting *v1.StatefulSet: csi-mock-volumes-7920-451/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-7920-451
STEP: Waiting for namespaces [csi-mock-volumes-7920-451] to vanish
[AfterEach] [sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 02:03:08.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
• [SLOW TEST:26.881 seconds]
[sig-storage] CSI mock volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
CSIStorageCapacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
CSIStorageCapacity used, no capacity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":15,"skipped":406,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 02:03:08.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pvc-protection
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72
Nov 6 02:03:09.023: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PVC
Nov 6 02:03:09.028: INFO: error finding default storageClass : No default storage class found
[AfterEach] [sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 02:03:09.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pvc-protection-7034" for this suite.
[AfterEach] [sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108
S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Verify "immediate" deletion of a PVC that is not in active use by a pod [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114
error finding default storageClass : No default storage class found
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819
------------------------------
S
------------------------------
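The PVC Protection spec above is skipped because the suite cannot find a default StorageClass in this cluster. For reference, a "default" class is simply a StorageClass carrying the storageclass.kubernetes.io/is-default-class annotation. The sketch below is not part of the suite; the object name and provisioner are placeholders chosen for illustration, built with the k8s.io/api types.

package main

import (
	"encoding/json"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example-default-sc", // placeholder name, not taken from this cluster
			Annotations: map[string]string{
				// The "default StorageClass" lookup keys on this annotation.
				"storageclass.kubernetes.io/is-default-class": "true",
			},
		},
		// Placeholder provisioner; substitute whatever provisioner the cluster actually runs.
		Provisioner: "example.com/hypothetical-provisioner",
	}
	out, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(out))
}

With a class like this (and a real provisioner) present, dynamic-provisioning specs such as the one above would run instead of skipping.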
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 02:00:44.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-link]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 6 02:02:20.318: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-5548b9fb-527a-4f99-96e9-51009b87c087-backend && ln -s /tmp/local-volume-test-5548b9fb-527a-4f99-96e9-51009b87c087-backend /tmp/local-volume-test-5548b9fb-527a-4f99-96e9-51009b87c087] Namespace:persistent-local-volumes-test-581 PodName:hostexec-node1-lmqpx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 6 02:02:20.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 6 02:02:20.972: INFO: Creating a PV followed by a PVC
Nov 6 02:02:20.981: INFO: Waiting for PV local-pv9s667 to bind to PVC pvc-s5hdc
Nov 6 02:02:20.981: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-s5hdc] to have phase Bound
Nov 6 02:02:20.983: INFO: PersistentVolumeClaim pvc-s5hdc found but phase is Pending instead of Bound.
Nov 6 02:02:22.987: INFO: PersistentVolumeClaim pvc-s5hdc found and phase=Bound (2.005677609s)
Nov 6 02:02:22.987: INFO: Waiting up to 3m0s for PersistentVolume local-pv9s667 to have phase Bound
Nov 6 02:02:22.990: INFO: PersistentVolume local-pv9s667 found and phase=Bound (3.237605ms)
[BeforeEach] Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
STEP: Checking fsGroup is set
STEP: Creating a pod
Nov 6 02:03:11.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-581 exec pod-33db45be-af6f-4a5d-b044-2a9178300c40 --namespace=persistent-local-volumes-test-581 -- stat -c %g /mnt/volume1'
Nov 6 02:03:11.258: INFO: stderr: ""
Nov 6 02:03:11.258: INFO: stdout: "1234\n"
STEP: Deleting pod
STEP: Deleting pod pod-33db45be-af6f-4a5d-b044-2a9178300c40 in namespace persistent-local-volumes-test-581
[AfterEach] [Volume type: dir-link]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 6 02:03:11.263: INFO: Deleting PersistentVolumeClaim "pvc-s5hdc"
Nov 6 02:03:11.267: INFO: Deleting PersistentVolume "local-pv9s667"
STEP: Removing the test directory
Nov 6 02:03:11.271: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5548b9fb-527a-4f99-96e9-51009b87c087 && rm -r /tmp/local-volume-test-5548b9fb-527a-4f99-96e9-51009b87c087-backend] Namespace:persistent-local-volumes-test-581 PodName:hostexec-node1-lmqpx ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 6 02:03:11.271: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 02:03:11.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-581" for this suite.
• [SLOW TEST:147.128 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
[Volume type: dir-link]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260
should set fsGroup for one pod [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:267
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set fsGroup for one pod [Slow]","total":-1,"completed":19,"skipped":668,"failed":0}
SS
------------------------------
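The spec above passes because `stat -c %g /mnt/volume1` inside the test pod reports group 1234, the fsGroup the spec requests. As a rough, assumption-labelled sketch of what such a pod looks like (the pod name, image and command are illustrative; only fsGroup 1234, the claim pvc-s5hdc and the /mnt/volume1 mount come from the log), using the k8s.io/api/core/v1 types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(1234) // the group the spec later verifies with `stat -c %g /mnt/volume1`
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-example-pod"}, // illustrative name
		Spec: corev1.PodSpec{
			// fsGroup asks the kubelet to make the volume group-owned by 1234.
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:    "write-pod",
				Image:   "busybox", // assumption; this run's events show a busybox-based test image
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "volume1",
					MountPath: "/mnt/volume1", // the path the spec inspects
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "volume1",
				VolumeSource: corev1.VolumeSource{
					// Claim name taken from the log above (bound to local PV local-pv9s667).
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "pvc-s5hdc"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}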
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 02:03:05.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename host-path-type-block-dev
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:325
STEP: Create a pod for further testing
Nov 6 02:03:05.562: INFO: The status of Pod test-hostpath-type-2sj9p is Pending, waiting for it to be Running (with Ready = true)
Nov 6 02:03:07.565: INFO: The status of Pod test-hostpath-type-2sj9p is Pending, waiting for it to be Running (with Ready = true)
Nov 6 02:03:09.565: INFO: The status of Pod test-hostpath-type-2sj9p is Running (Ready = true)
STEP: running on node node2
STEP: Create a block device for further testing
Nov 6 02:03:09.567: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/ablkdev b 89 1] Namespace:host-path-type-block-dev-6484 PodName:test-hostpath-type-2sj9p ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 6 02:03:09.567: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350
[AfterEach] [sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 02:03:13.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-path-type-block-dev-6484" for this suite.
• [SLOW TEST:8.170 seconds]
[sig-storage] HostPathType Block Device [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:350
------------------------------
{"msg":"PASSED [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset","total":-1,"completed":16,"skipped":610,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
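The spec above creates a block device on node2 with `mknod /mnt/test/ablkdev b 89 1` and mounts it through a hostPath volume whose type is left unset (HostPathUnset), so the kubelet performs no check of what the path actually is. Below is a minimal sketch of that volume and, for comparison, the strict variant that does require a block device; this is an illustration built on the k8s.io/api/core/v1 types, not the suite's own helper.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Variant exercised above: Type left nil, i.e. HostPathUnset, so no type
	// validation happens when the kubelet mounts /mnt/test/ablkdev.
	unchecked := corev1.Volume{
		Name: "ablkdev",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/mnt/test/ablkdev"},
		},
	}

	// Strict variant for comparison: require the path to be a block device.
	blockDev := corev1.HostPathBlockDev
	strict := corev1.Volume{
		Name: "ablkdev-strict",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/mnt/test/ablkdev", Type: &blockDev},
		},
	}

	for _, v := range []corev1.Volume{unchecked, strict} {
		out, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(out))
	}
}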
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 02:03:13.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Nov 6 02:03:13.824: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 02:03:13.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-3039" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
PVController [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383
should create unbound pvc count metrics for pvc controller after creating pvc only
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:494
Only supported for providers [gce gke aws] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 02:03:13.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Nov 6 02:03:13.903: INFO: Only supported for providers [gce gke aws] (not local)
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 02:03:13.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-8743" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
PVController [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:383
should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:503
Only supported for providers [gce gke aws] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSS
------------------------------
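Both Volume metrics specs above are skipped with "Only supported for providers [gce gke aws] (not local)": the suite gates them on the cloud provider it was started against. The standalone sketch below mirrors that gating pattern in plain Go test code; it is not the e2e framework's actual helper, and the -provider flag name here is only an assumption for illustration.

package gating

import (
	"flag"
	"testing"
)

// Assumed flag name for illustration only; the real suite has its own provider plumbing.
var provider = flag.String("provider", "local", "cloud provider the tests are pointed at")

// skipUnlessProviderIs skips the calling test unless the configured provider is in the
// allow-list, mirroring the skips recorded in the log above.
func skipUnlessProviderIs(t *testing.T, allowed ...string) {
	t.Helper()
	for _, p := range allowed {
		if *provider == p {
			return
		}
	}
	t.Skipf("Only supported for providers %v (not %s)", allowed, *provider)
}

// Run with, for example: go test -run TestVolumeMetricsLike -args -provider=gce
func TestVolumeMetricsLike(t *testing.T) {
	skipUnlessProviderIs(t, "gce", "gke", "aws")
	// ...provider-specific metrics assertions would go here...
}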
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 02:00:54.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
Nov 6 02:02:22.110: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-babd0f5e-293c-4277-98ff-1f7b0156a928 && mount --bind /tmp/local-volume-test-babd0f5e-293c-4277-98ff-1f7b0156a928 /tmp/local-volume-test-babd0f5e-293c-4277-98ff-1f7b0156a928] Namespace:persistent-local-volumes-test-9958 PodName:hostexec-node1-lkqp9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 6 02:02:22.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
Nov 6 02:02:22.229: INFO: Creating a PV followed by a PVC
Nov 6 02:02:22.239: INFO: Waiting for PV local-pvd7s6n to bind to PVC pvc-6shtt
Nov 6 02:02:22.239: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6shtt] to have phase Bound
Nov 6 02:02:22.241: INFO: PersistentVolumeClaim pvc-6shtt found but phase is Pending instead of Bound.
Nov 6 02:02:24.244: INFO: PersistentVolumeClaim pvc-6shtt found and phase=Bound (2.005767828s)
Nov 6 02:02:24.244: INFO: Waiting up to 3m0s for PersistentVolume local-pvd7s6n to have phase Bound
Nov 6 02:02:24.247: INFO: PersistentVolume local-pvd7s6n found and phase=Bound (2.143879ms)
[BeforeEach] Set fsGroup for local volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:261
[It] should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274
STEP: Create first pod and check fsGroup is set
STEP: Creating a pod
Nov 6 02:03:08.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9958 exec pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637 --namespace=persistent-local-volumes-test-9958 -- stat -c %g /mnt/volume1'
Nov 6 02:03:08.603: INFO: stderr: ""
Nov 6 02:03:08.603: INFO: stdout: "1000\n"
Nov 6 02:03:10.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=persistent-local-volumes-test-9958 exec pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637 --namespace=persistent-local-volumes-test-9958 -- stat -c %g /mnt/volume1'
Nov 6 02:03:10.885: INFO: stderr: ""
Nov 6 02:03:10.885: INFO: stdout: "1000\n"
Nov 6 02:03:12.889: FAIL: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637
Unexpected error:
<*errors.errorString | 0xc004d14560>: {
    s: "Failed to find \"1234\", last result: \"1000\n\"",
}
Failed to find "1234", last result: "1000
"
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.createPodWithFsGroupTest(0xc001b725a0, 0xc001ccb620, 0x4d2, 0x4d2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:810 +0x317
k8s.io/kubernetes/test/e2e/storage.glob..func21.2.6.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:277 +0x8d
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000803080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000803080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000803080, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [Volume type: dir-bindmounted]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
Nov 6 02:03:12.891: INFO: Deleting PersistentVolumeClaim "pvc-6shtt"
Nov 6 02:03:12.896: INFO: Deleting PersistentVolume "local-pvd7s6n"
STEP: Removing the test directory
Nov 6 02:03:12.901: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-volume-test-babd0f5e-293c-4277-98ff-1f7b0156a928 && rm -r /tmp/local-volume-test-babd0f5e-293c-4277-98ff-1f7b0156a928] Namespace:persistent-local-volumes-test-9958 PodName:hostexec-node1-lkqp9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Nov 6 02:03:12.901: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "persistent-local-volumes-test-9958".
STEP: Found 11 events.
Nov 6 02:03:13.012: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-node1-lkqp9: { } Scheduled: Successfully assigned persistent-local-volumes-test-9958/hostexec-node1-lkqp9 to node1
Nov 6 02:03:13.012: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637: { } Scheduled: Successfully assigned persistent-local-volumes-test-9958/pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637 to node1
Nov 6 02:03:13.012: INFO: At 2021-11-06 02:01:31 +0000 UTC - event for hostexec-node1-lkqp9: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 6 02:03:13.012: INFO: At 2021-11-06 02:01:31 +0000 UTC - event for hostexec-node1-lkqp9: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 285.268119ms
Nov 6 02:03:13.012: INFO: At 2021-11-06 02:01:31 +0000 UTC - event for hostexec-node1-lkqp9: {kubelet node1} Created: Created container agnhost-container
Nov 6 02:03:13.012: INFO: At 2021-11-06 02:01:31 +0000 UTC - event for hostexec-node1-lkqp9: {kubelet node1} Started: Started container agnhost-container
Nov 6 02:03:13.012: INFO: At 2021-11-06 02:02:26 +0000 UTC - event for pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637: {kubelet node1} AlreadyMountedVolume: The requested fsGroup is 1234, but the volume local-pvd7s6n has GID 1000. The volume may not be shareable.
Nov 6 02:03:13.012: INFO: At 2021-11-06 02:02:44 +0000 UTC - event for pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Nov 6 02:03:13.012: INFO: At 2021-11-06 02:02:44 +0000 UTC - event for pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 288.79765ms
Nov 6 02:03:13.012: INFO: At 2021-11-06 02:02:45 +0000 UTC - event for pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637: {kubelet node1} Created: Created container write-pod
Nov 6 02:03:13.012: INFO: At 2021-11-06 02:02:48 +0000 UTC - event for pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637: {kubelet node1} Started: Started container write-pod
Nov 6 02:03:13.015: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 6 02:03:13.015: INFO: hostexec-node1-lkqp9 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 02:00:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 02:01:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 02:01:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 02:00:54 +0000 UTC }]
Nov 6 02:03:13.015: INFO: pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 02:02:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 02:02:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 02:02:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 02:02:24 +0000 UTC }]
Nov 6 02:03:13.015: INFO:
Nov 6 02:03:13.020: INFO: Logging node info for node master1
Nov 6 02:03:13.023: INFO: Node Info: &Node{ObjectMeta:{master1 acabf68f-e6fa-4376-87a7-953399a106b3 113296 0 2021-11-05 20:58:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:06 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:06 +0000 
UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:06 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 02:03:06 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:03:13.023: INFO: Logging kubelet events for node master1 Nov 6 02:03:13.025: INFO: Logging pods the kubelet thinks is on node master1 Nov 6 02:03:13.053: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.053: INFO: Container coredns ready: true, restart count 2 Nov 6 02:03:13.054: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 6 02:03:13.054: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 02:03:13.054: INFO: Container node-exporter ready: true, restart count 0 Nov 6 02:03:13.054: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.054: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 02:03:13.054: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.054: INFO: Container kube-controller-manager ready: true, restart count 3 Nov 6 02:03:13.054: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.054: INFO: Container kube-scheduler ready: true, restart count 0 Nov 6 02:03:13.054: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 6 02:03:13.054: INFO: Init container install-cni ready: true, restart count 2 Nov 6 02:03:13.054: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 02:03:13.054: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.054: INFO: Container kube-proxy ready: true, restart count 1 Nov 6 02:03:13.054: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 6 
02:03:13.054: INFO: Container kube-multus ready: true, restart count 1 Nov 6 02:03:13.054: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded) Nov 6 02:03:13.054: INFO: Container docker-registry ready: true, restart count 0 Nov 6 02:03:13.054: INFO: Container nginx ready: true, restart count 0 W1106 02:03:13.067167 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 6 02:03:13.148: INFO: Latency metrics for node master1 Nov 6 02:03:13.148: INFO: Logging node info for node master2 Nov 6 02:03:13.150: INFO: Node Info: &Node{ObjectMeta:{master2 004d4571-8588-4d18-93d0-ad0af4174866 113301 0 2021-11-05 20:59:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:06 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:06 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:06 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 02:03:06 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:03:13.151: INFO: Logging kubelet events for node master2 Nov 6 02:03:13.153: INFO: Logging pods the kubelet thinks is on node master2 Nov 6 02:03:13.179: INFO: kube-controller-manager-master2 started at 
2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.179: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 6 02:03:13.179: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.179: INFO: Container kube-proxy ready: true, restart count 1 Nov 6 02:03:13.179: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 6 02:03:13.179: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:03:13.179: INFO: Container kube-flannel ready: true, restart count 3 Nov 6 02:03:13.179: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.179: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 02:03:13.179: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.179: INFO: Container kube-scheduler ready: true, restart count 3 Nov 6 02:03:13.179: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.179: INFO: Container kube-multus ready: true, restart count 1 Nov 6 02:03:13.179: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.179: INFO: Container nfd-controller ready: true, restart count 0 Nov 6 02:03:13.179: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 6 02:03:13.179: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 02:03:13.179: INFO: Container node-exporter ready: true, restart count 0 W1106 02:03:13.194635 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Nov 6 02:03:13.256: INFO: Latency metrics for node master2 Nov 6 02:03:13.256: INFO: Logging node info for node master3 Nov 6 02:03:13.260: INFO: Node Info: &Node{ObjectMeta:{master3 d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 113370 0 2021-11-05 20:59:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 
21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:10 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:10 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:10 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 02:03:10 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:03:13.260: INFO: Logging kubelet events for node master3 Nov 6 02:03:13.263: INFO: Logging pods the kubelet thinks is on node master3 Nov 6 02:03:13.280: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.280: INFO: Container kube-apiserver ready: true, restart count 0 Nov 6 02:03:13.280: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.280: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 6 02:03:13.280: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 6 02:03:13.280: INFO: Init container install-cni ready: true, restart count 0 Nov 6 02:03:13.280: INFO: Container kube-flannel ready: true, restart count 1 Nov 6 02:03:13.280: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.280: INFO: Container coredns ready: true, restart count 1 Nov 6 02:03:13.280: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 6 02:03:13.280: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 02:03:13.280: INFO: Container node-exporter ready: true, restart count 0 Nov 6 02:03:13.280: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.280: INFO: Container kube-scheduler ready: true, restart count 3 Nov 6 02:03:13.280: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.280: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 02:03:13.280: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.280: INFO: Container kube-multus ready: true, restart count 1 Nov 6 02:03:13.280: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.280: INFO: Container autoscaler ready: true, restart count 1 W1106 02:03:13.293464 38 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 6 02:03:13.359: INFO: Latency metrics for node master3 Nov 6 02:03:13.359: INFO: Logging node info for node node1 Nov 6 02:03:13.362: INFO: Node Info: &Node{ObjectMeta:{node1 290b18e7-da33-4da8-b78a-8a7f28c49abf 113395 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-ver
sion.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-11-06 01:45:26 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-11-06 02:00:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-11-06 02:00:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:11 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:11 +0000 
UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:11 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 02:03:11 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:373281573,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:03:13.363: INFO: Logging kubelet events for node node1 Nov 6 02:03:13.365: INFO: Logging pods the kubelet thinks is on node node1 Nov 6 02:03:13.989: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 6 02:03:13.989: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 02:03:13.989: INFO: Container node-exporter ready: true, restart count 0 Nov 6 02:03:13.989: INFO: pod-6fa4dca5-dd80-4c99-a585-f603cd17f0a0 started at 2021-11-06 02:00:33 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-e99b75dd-f9df-4b69-a9fc-10adcbc685f7 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container 
write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container kube-multus ready: true, restart count 1 Nov 6 02:03:13.989: INFO: pod-e174eb65-f4d1-4dd8-bfc7-9b4870f826c7 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-7f7b68ed-e883-4f9a-bc40-1843630ff15e started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-a17c1e25-10a2-4c54-820c-a0aa7190dde1 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-7cfc9d1b-8df1-41ab-b36c-e60fd8993178 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-ec4668e5-5b9d-4b73-a8ef-14be5312b5d7 started at 2021-11-06 02:00:33 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-fc802444-b158-4eff-8b8c-6cc3fd961b94 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-43d91d89-01b7-4b67-9b91-f0f1dbdd7dbe started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-50e27db8-cb66-4731-b228-2bb0221f3c18 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-25628b39-597b-4583-9c02-d16fe354069d started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: prometheus-k8s-0 started at 2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded) Nov 6 02:03:13.989: INFO: Container config-reloader ready: true, restart count 0 Nov 6 02:03:13.989: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 6 02:03:13.989: INFO: Container grafana ready: true, restart count 0 Nov 6 02:03:13.989: INFO: Container prometheus ready: true, restart count 1 Nov 6 02:03:13.989: INFO: pod-c8044528-4d11-4f34-b80b-0270e674c61b started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-c8402e24-470e-46a2-8657-0862e897ca03 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-a687a84a-3e61-4aaa-ad91-fd24206de444 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-e7819cbe-1d7f-41a0-9e67-2962d4515177 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: 
cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container cmk-webhook ready: true, restart count 0 Nov 6 02:03:13.989: INFO: pod-configmaps-6f4f54b2-3368-46ae-9249-15e33d763641 started at 2021-11-06 01:58:30 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container agnhost-container ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-c9591dd5-bd0e-4db9-8774-5c18666cc211 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-3750f5c1-87ab-4844-8602-2ff9728c810b started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 02:03:13.989: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 02:03:13.989: INFO: pod-5e95350b-afb2-4ebc-a4d2-0940d38efcfc started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-07998503-5d9c-44e0-9f4e-430e7834a873 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 02:03:13.989: INFO: pod-fe39ba6e-0196-4587-b570-e8910ef611be started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-ee5030b7-06ef-4c50-a256-ece94ba0ee2a started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-c0c41328-2f4c-4791-851a-fa61f8fd9e93 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: hostexec-node1-q24b9 started at 2021-11-06 01:59:32 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container agnhost-container ready: true, restart count 0 Nov 6 02:03:13.989: INFO: pod-717e1400-a9dc-457b-8243-9f29912bc41c started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-3ef1dc27-590c-4ce5-bbb6-09349eb18171 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container tas-extender ready: true, restart count 0 Nov 6 02:03:13.989: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 
container statuses recorded) Nov 6 02:03:13.989: INFO: Container discover ready: false, restart count 0 Nov 6 02:03:13.989: INFO: Container init ready: false, restart count 0 Nov 6 02:03:13.989: INFO: Container install ready: false, restart count 0 Nov 6 02:03:13.989: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded) Nov 6 02:03:13.989: INFO: Container nodereport ready: true, restart count 0 Nov 6 02:03:13.989: INFO: Container reconcile ready: true, restart count 0 Nov 6 02:03:13.989: INFO: pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637 started at 2021-11-06 02:02:24 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: true, restart count 0 Nov 6 02:03:13.989: INFO: pod-961711f0-e9e8-4b56-98fd-12f8e219e3b3 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-98829f5e-871a-437d-8fa7-1d217555c0fe started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-fcbc8c18-8d8c-4da3-bc9d-3847b0e4fac8 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-fe5f34db-8d88-4051-96ea-3fd21786a135 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 02:03:13.989: INFO: pod-b7a24878-2dee-4196-8b43-5336af58471d started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-fa1ee96c-0961-41d1-b580-f05e02644aaa started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-9396b51d-cce6-434e-9d20-4fc028eb0c7d started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.989: INFO: pod-c62f84e3-a715-4b93-bc2a-47084124dad2 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.989: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-7bc2705f-ccc1-4478-b693-aad93042b88d started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-4a3fc66e-12ec-46db-9ec0-f009ee9e4017 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-ed275da6-ce25-46a0-a25b-82188006f6a6 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-8f928a94-18fd-4d3e-b17d-6aa39a55cff6 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 
02:03:13.990: INFO: hostexec-node1-lkqp9 started at 2021-11-06 02:00:54 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container agnhost-container ready: true, restart count 0 Nov 6 02:03:13.990: INFO: pod-386deeb5-55cc-4fc1-8b79-c121e975fd1c started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 6 02:03:13.990: INFO: Container collectd ready: true, restart count 0 Nov 6 02:03:13.990: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 02:03:13.990: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 02:03:13.990: INFO: pod-configmaps-733b641b-4f3c-4768-85f4-857ea02f6ead started at 2021-11-06 01:59:46 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container agnhost-container ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-a28653dd-739d-47b3-a713-c670d7e72e55 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-41f5da93-3065-4d25-a22e-d494c48cd199 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: hostexec-node1-lmqpx started at 2021-11-06 02:00:44 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container agnhost-container ready: true, restart count 0 Nov 6 02:03:13.990: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 6 02:03:13.990: INFO: pod-a63db214-6953-4537-8c15-4b45005c6df8 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-ea8184cf-5ac6-4e95-97b3-2922a901fc01 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-33db45be-af6f-4a5d-b044-2a9178300c40 started at 2021-11-06 02:02:23 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: true, restart count 0 Nov 6 02:03:13.990: INFO: pod-cdc59fe8-693a-4229-b616-f0788331430e started at 2021-11-06 02:00:33 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-f6842784-878f-4308-a1b7-10251c346243 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-36992857-4ccc-4e56-be8b-578d0dfc7d7c started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-7f21e9d8-a293-4fdb-9fa0-e53c1342eb2d started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-291a00ae-5d3e-49ad-b3be-ff4124c81276 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 
02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-863b8be2-4561-44d1-a5b2-8f8cfebb4b76 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-0e6b105a-605d-4ff7-b019-ab9838b7fd21 started at 2021-11-06 02:03:11 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-secrets-02ed41df-c19d-43fa-b8b5-b891609debc0 started at 2021-11-06 01:58:43 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container creates-volume-test ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-98dd8a88-ff05-4fb0-8f41-f4bfb640a685 started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-a1784587-cc4a-4412-8351-fb8e63854e9f started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: pod-331262d7-198a-464b-bbf2-7020505ec0eb started at 2021-11-06 02:00:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Container write-pod ready: false, restart count 0 Nov 6 02:03:13.990: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 6 02:03:13.990: INFO: Init container install-cni ready: true, restart count 2 Nov 6 02:03:13.990: INFO: Container kube-flannel ready: true, restart count 3 W1106 02:03:14.004847 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Nov 6 02:03:14.859: INFO: Latency metrics for node node1 Nov 6 02:03:14.859: INFO: Logging node info for node node2 Nov 6 02:03:14.863: INFO: Node Info: &Node{ObjectMeta:{node2 7d7e71f0-82d7-49ba-b69a-56600dd59b3f 113398 0 2021-11-05 21:00:39 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-ver
sion.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-11-06 01:44:03 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-11-06 02:00:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-11-06 02:02:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:11 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:11 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 02:03:11 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 02:03:11 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 
k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:51645752,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67 k8s.gcr.io/busybox:latest],SizeBytes:2433303,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 6 02:03:14.864: INFO: Logging kubelet events for node node2 Nov 6 02:03:14.866: INFO: Logging pods the kubelet thinks is on node node2 Nov 6 02:03:14.883: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded) Nov 6 02:03:14.883: INFO: Container discover ready: false, restart count 0 Nov 6 02:03:14.883: INFO: Container init ready: false, restart count 0 Nov 6 02:03:14.883: INFO: Container install ready: false, restart count 0 Nov 6 02:03:14.883: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded) Nov 6 02:03:14.883: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 02:03:14.883: INFO: Container node-exporter ready: true, restart count 0 Nov 6 02:03:14.883: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.883: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 02:03:14.883: INFO: pod-secrets-ca0dbc3e-7ef8-4489-b621-e7b024284c05 started at 2021-11-06 02:02:18 +0000 UTC (0+1 container statuses 
recorded) Nov 6 02:03:14.883: INFO: Container creates-volume-test ready: false, restart count 0 Nov 6 02:03:14.883: INFO: test-hostpath-type-2sj9p started at 2021-11-06 02:03:05 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.883: INFO: Container host-path-testing ready: true, restart count 0 Nov 6 02:03:14.883: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded) Nov 6 02:03:14.883: INFO: Init container install-cni ready: true, restart count 1 Nov 6 02:03:14.883: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 02:03:14.883: INFO: test-hostpath-type-fj52j started at 2021-11-06 02:03:14 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.883: INFO: Container host-path-sh-testing ready: false, restart count 0 Nov 6 02:03:14.883: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.883: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 02:03:14.883: INFO: pod-configmaps-42cce2fd-7fe0-46a0-9215-2a55b1a25ef8 started at 2021-11-06 01:58:21 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.883: INFO: Container agnhost-container ready: false, restart count 0 Nov 6 02:03:14.883: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.883: INFO: Container kube-multus ready: true, restart count 1 Nov 6 02:03:14.883: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.883: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 02:03:14.883: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded) Nov 6 02:03:14.884: INFO: Container collectd ready: true, restart count 0 Nov 6 02:03:14.884: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 02:03:14.884: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 02:03:14.884: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded) Nov 6 02:03:14.884: INFO: Container nodereport ready: true, restart count 0 Nov 6 02:03:14.884: INFO: Container reconcile ready: true, restart count 0 Nov 6 02:03:14.884: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded) Nov 6 02:03:14.884: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 02:03:14.884: INFO: Container prometheus-operator ready: true, restart count 0 Nov 6 02:03:14.884: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.884: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 02:03:14.884: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.884: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 02:03:14.884: INFO: hostexec-node2-rcvhb started at 2021-11-06 02:03:11 +0000 UTC (0+1 container statuses recorded) Nov 6 02:03:14.884: INFO: Container agnhost-container ready: true, restart count 0 W1106 02:03:14.895568 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Nov 6 02:03:15.859: INFO: Latency metrics for node node2 Nov 6 02:03:15.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9958" for this suite. • Failure [141.809 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-bindmounted] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Set fsGroup for local volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:260 should set same fsGroup for two pods simultaneously [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:274 Nov 6 02:03:12.889: failed to get expected fsGroup 1234 on directory /mnt/volume1 in pod pod-2e6032d5-d01d-4bd8-a26a-c2a180adb637 Unexpected error: <*errors.errorString | 0xc004d14560>: { s: "Failed to find \"1234\", last result: \"1000\n\"", } Failed to find "1234", last result: "1000 " occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:810 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]","total":-1,"completed":12,"skipped":648,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:13.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 6 02:03:13.969: INFO: The status of Pod test-hostpath-type-fj52j is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:03:15.972: INFO: The status of Pod test-hostpath-type-fj52j is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:03:17.974: INFO: The status of Pod test-hostpath-type-fj52j is Running (Ready = true) STEP: running on node node2 [It] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:20.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-5615" for this suite. 
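Note on the failure above: the "1234" the spec looks for is the pod-level fsGroup, which the kubelet is expected to apply to the contents of the mounted local volume; the log shows the directory ending up with group 1000 instead. A minimal Go sketch of a pod requesting fsGroup 1234 on a claim-backed volume follows; the object names, image, and claim are illustrative, not the fixtures generated by this run.

    // Sketch only: a pod whose securityContext.fsGroup should make the mounted
    // volume group-owned by GID 1234. Names and image are hypothetical.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        fsGroup := int64(1234)
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-demo"},
            Spec: corev1.PodSpec{
                // fsGroup asks the kubelet to chgrp the volume contents to 1234;
                // in the failing spec above the files kept GID 1000 instead.
                SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
                Containers: []corev1.Container{{
                    Name:    "write-pod",
                    Image:   "busybox:1.28",
                    Command: []string{"sh", "-c", "ls -ln /mnt/volume1; sleep 3600"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "local-vol",
                        MountPath: "/mnt/volume1",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "local-vol",
                    VolumeSource: corev1.VolumeSource{
                        PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                            ClaimName: "local-pvc", // hypothetical claim name
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }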
• [SLOW TEST:6.085 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:202 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket","total":-1,"completed":17,"skipped":694,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:21.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557 STEP: Creating configMap with name cm-test-opt-create-e9fe185b-eb54-4c20-b191-149f4d4b759c STEP: Creating the pod [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:21.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2251" for this suite. • [SLOW TEST:300.064 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:557 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":18,"skipped":655,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:15.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-file STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:124 STEP: Create a pod for further testing Nov 6 02:03:15.919: INFO: The status of Pod test-hostpath-type-6dqct is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:03:17.923: INFO: The status of Pod test-hostpath-type-6dqct is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:03:19.923: INFO: The status of Pod test-hostpath-type-6dqct is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new file 'afile' when HostPathType is HostPathFileOrCreate [It] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:27.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-file-838" for this suite. • [SLOW TEST:12.107 seconds] [sig-storage] HostPathType File [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting file 'afile' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:161 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev","total":-1,"completed":13,"skipped":655,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:11.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes Nov 6 02:03:13.448: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-fa5431a6-2d46-44dc-ac48-cef33bfcb9e6-backend && ln -s /tmp/local-volume-test-fa5431a6-2d46-44dc-ac48-cef33bfcb9e6-backend /tmp/local-volume-test-fa5431a6-2d46-44dc-ac48-cef33bfcb9e6] Namespace:persistent-local-volumes-test-3713 PodName:hostexec-node2-rcvhb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:03:13.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 02:03:13.532: INFO: Creating a PV followed by a PVC Nov 6 02:03:13.539: INFO: Waiting for PV local-pvbnw4g to bind to PVC pvc-f6ktp Nov 6 02:03:13.539: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-f6ktp] to have phase Bound Nov 6 02:03:13.542: INFO: PersistentVolumeClaim pvc-f6ktp found but phase is Pending instead of Bound. 
Nov 6 02:03:15.546: INFO: PersistentVolumeClaim pvc-f6ktp found and phase=Bound (2.00702142s) Nov 6 02:03:15.546: INFO: Waiting up to 3m0s for PersistentVolume local-pvbnw4g to have phase Bound Nov 6 02:03:15.548: INFO: PersistentVolume local-pvbnw4g found and phase=Bound (1.68487ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 6 02:03:21.575: INFO: pod "pod-e9311029-2728-4989-b0c7-0ea30c84cbb2" created on Node "node2" STEP: Writing in pod1 Nov 6 02:03:21.575: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3713 PodName:pod-e9311029-2728-4989-b0c7-0ea30c84cbb2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:03:21.575: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:03:22.223: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 02:03:22.223: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3713 PodName:pod-e9311029-2728-4989-b0c7-0ea30c84cbb2 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:03:22.223: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:03:22.512: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-e9311029-2728-4989-b0c7-0ea30c84cbb2 in namespace persistent-local-volumes-test-3713 STEP: Creating pod2 STEP: Creating a pod Nov 6 02:03:28.539: INFO: pod "pod-d90eea46-3121-4e60-a736-4ec9e35f83e9" created on Node "node2" STEP: Reading in pod2 Nov 6 02:03:28.539: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3713 PodName:pod-d90eea46-3121-4e60-a736-4ec9e35f83e9 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:03:28.539: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:03:28.623: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-d90eea46-3121-4e60-a736-4ec9e35f83e9 in namespace persistent-local-volumes-test-3713 [AfterEach] [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 02:03:28.629: INFO: Deleting PersistentVolumeClaim "pvc-f6ktp" Nov 6 02:03:28.632: INFO: Deleting PersistentVolume "local-pvbnw4g" STEP: Removing the test directory Nov 6 02:03:28.636: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fa5431a6-2d46-44dc-ac48-cef33bfcb9e6 && rm -r /tmp/local-volume-test-fa5431a6-2d46-44dc-ac48-cef33bfcb9e6-backend] Namespace:persistent-local-volumes-test-3713 PodName:hostexec-node2-rcvhb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:03:28.636: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 
02:03:28.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3713" for this suite. • [SLOW TEST:17.343 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: dir-link] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":20,"skipped":670,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:29.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:30.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1383" for this suite. 
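For reference, the PersistentVolumes-local specs above create a PV of type local backed by a node directory (a symlinked one in the dir-link case), and the PV carries required node affinity so that any pod using the claim lands on that node. A Go sketch of the two objects; paths, node and object names are illustrative, and it assumes a v1.21-era k8s.io/api where the claim's resources field is corev1.ResourceRequirements.

    // Sketch only: a node-pinned "local" PersistentVolume plus a matching claim.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        sc := "local-storage"
        pv := corev1.PersistentVolume{
            ObjectMeta: metav1.ObjectMeta{Name: "local-pv-demo"},
            Spec: corev1.PersistentVolumeSpec{
                Capacity:         corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("10Mi")},
                AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                StorageClassName: sc,
                PersistentVolumeSource: corev1.PersistentVolumeSource{
                    Local: &corev1.LocalVolumeSource{Path: "/tmp/local-volume-demo"},
                },
                // Required node affinity is what ties pods using this PV to one node.
                NodeAffinity: &corev1.VolumeNodeAffinity{
                    Required: &corev1.NodeSelector{
                        NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                            MatchExpressions: []corev1.NodeSelectorRequirement{{
                                Key:      "kubernetes.io/hostname",
                                Operator: corev1.NodeSelectorOpIn,
                                Values:   []string{"node2"},
                            }},
                        }},
                    },
                },
            },
        }
        pvc := corev1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "local-pvc-demo"},
            Spec: corev1.PersistentVolumeClaimSpec{
                AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                StorageClassName: &sc,
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("10Mi")},
                },
            },
        }
        for _, obj := range []interface{}{pv, pvc} {
            out, _ := json.MarshalIndent(obj, "", "  ")
            fmt.Println(string(out))
        }
    }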
• [SLOW TEST:300.055 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:460 ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:20.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-fb5aa220-b4c1-4c4d-8bea-fbb310c07c4a" Nov 6 02:03:24.086: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-fb5aa220-b4c1-4c4d-8bea-fbb310c07c4a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-fb5aa220-b4c1-4c4d-8bea-fbb310c07c4a" "/tmp/local-volume-test-fb5aa220-b4c1-4c4d-8bea-fbb310c07c4a"] Namespace:persistent-local-volumes-test-3606 PodName:hostexec-node2-4wx6f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:03:24.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 02:03:24.314: INFO: Creating a PV followed by a PVC Nov 6 02:03:24.321: INFO: Waiting for PV local-pv9nfxh to bind to PVC pvc-nwkc7 Nov 6 02:03:24.321: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-nwkc7] to have phase Bound Nov 6 02:03:24.324: INFO: PersistentVolumeClaim pvc-nwkc7 found but phase is Pending instead of Bound. 
Nov 6 02:03:26.328: INFO: PersistentVolumeClaim pvc-nwkc7 found and phase=Bound (2.006344356s) Nov 6 02:03:26.328: INFO: Waiting up to 3m0s for PersistentVolume local-pv9nfxh to have phase Bound Nov 6 02:03:26.331: INFO: PersistentVolume local-pv9nfxh found and phase=Bound (3.028077ms) [BeforeEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:215 STEP: Creating pod1 STEP: Creating a pod Nov 6 02:03:34.356: INFO: pod "pod-7f924451-e2a8-4b37-84b2-f895be75ff77" created on Node "node2" STEP: Writing in pod1 Nov 6 02:03:34.356: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3606 PodName:pod-7f924451-e2a8-4b37-84b2-f895be75ff77 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:03:34.356: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:03:34.725: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: [It] should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 STEP: Reading in pod1 Nov 6 02:03:34.725: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-3606 PodName:pod-7f924451-e2a8-4b37-84b2-f895be75ff77 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:03:34.725: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:03:34.993: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: [AfterEach] One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:227 STEP: Deleting pod1 STEP: Deleting pod pod-7f924451-e2a8-4b37-84b2-f895be75ff77 in namespace persistent-local-volumes-test-3606 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 02:03:34.999: INFO: Deleting PersistentVolumeClaim "pvc-nwkc7" Nov 6 02:03:35.004: INFO: Deleting PersistentVolume "local-pv9nfxh" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-fb5aa220-b4c1-4c4d-8bea-fbb310c07c4a" Nov 6 02:03:35.010: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-fb5aa220-b4c1-4c4d-8bea-fbb310c07c4a"] Namespace:persistent-local-volumes-test-3606 PodName:hostexec-node2-4wx6f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:03:35.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:03:35.371: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-fb5aa220-b4c1-4c4d-8bea-fbb310c07c4a] Namespace:persistent-local-volumes-test-3606 PodName:hostexec-node2-4wx6f ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:03:35.371: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:35.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3606" for this suite. • [SLOW TEST:15.517 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 One pod requesting one prebound PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":18,"skipped":700,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:35.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:68 Nov 6 02:03:35.598: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian) [AfterEach] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:35.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-5402" for this suite. 
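In the tmpfs spec name above, "prebound" means the claim is already bound to its PV before the pod is created; the log shows the PV created first and the claim reaching Bound a couple of seconds later. For reference, a claim can also be pointed at a specific PV explicitly through spec.volumeName; the sketch below shows that shape (names are illustrative and this is not the framework's own helper), again assuming a v1.21-era k8s.io/api.

    // Sketch only: a claim pre-targeted at one PersistentVolume via volumeName.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        sc := "local-storage"
        pvc := corev1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{Name: "prebound-pvc-demo"},
            Spec: corev1.PersistentVolumeClaimSpec{
                AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                StorageClassName: &sc,
                VolumeName:       "local-pv-demo", // bind directly to this PV, skipping matching
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("10Mi")},
                },
            },
        }
        out, _ := json.MarshalIndent(pvc, "", "  ")
        fmt.Println(string(out))
    }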
S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds] [sig-storage] Volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 GlusterFS [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:128 should be mountable /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:129 Only supported for node OS distro [gci ubuntu custom] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]","total":-1,"completed":15,"skipped":412,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:30.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50 [It] new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 6 02:03:30.078: INFO: Waiting up to 5m0s for pod "pod-8545a0c5-b260-46eb-9603-20535983641b" in namespace "emptydir-219" to be "Succeeded or Failed" Nov 6 02:03:30.082: INFO: Pod "pod-8545a0c5-b260-46eb-9603-20535983641b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.680482ms Nov 6 02:03:32.085: INFO: Pod "pod-8545a0c5-b260-46eb-9603-20535983641b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007581729s Nov 6 02:03:34.089: INFO: Pod "pod-8545a0c5-b260-46eb-9603-20535983641b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01181028s Nov 6 02:03:36.094: INFO: Pod "pod-8545a0c5-b260-46eb-9603-20535983641b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016670139s STEP: Saw pod success Nov 6 02:03:36.094: INFO: Pod "pod-8545a0c5-b260-46eb-9603-20535983641b" satisfied condition "Succeeded or Failed" Nov 6 02:03:36.096: INFO: Trying to get logs from node node2 pod pod-8545a0c5-b260-46eb-9603-20535983641b container test-container: STEP: delete the pod Nov 6 02:03:36.828: INFO: Waiting for pod pod-8545a0c5-b260-46eb-9603-20535983641b to disappear Nov 6 02:03:36.830: INFO: Pod pod-8545a0c5-b260-46eb-9603-20535983641b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:36.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-219" for this suite. 
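The emptydir case above is the passing counterpart of the earlier fsGroup failure: a memory-backed emptyDir is mounted into a non-root container with a pod-level fsGroup, and new files are expected to come out group-owned by that GID. A compact sketch; image, IDs and names are illustrative, not the test's generated ones.

    // Sketch only: memory-backed emptyDir, non-root user, pod-level fsGroup.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid, gid := int64(1000), int64(123)
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-fsgroup-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox:1.28",
                    Command: []string{"sh", "-c", "echo hi > /mnt/tmpfs/f && ls -ln /mnt/tmpfs/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/tmpfs"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    VolumeSource: corev1.VolumeSource{
                        // Medium: Memory backs the emptyDir with tmpfs, as in the spec above.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }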
• [SLOW TEST:6.796 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48 new files should be created with FSGroup ownership when container is non-root /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":16,"skipped":412,"failed":0} [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:36.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pvc-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72 Nov 6 02:03:36.862: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PVC Nov 6 02:03:36.867: INFO: error finding default storageClass : No default storage class found [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:36.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pvc-protection-1409" for this suite. [AfterEach] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:108 S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-storage] PVC Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Verify that PVC in active use by a pod is not removed immediately [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126 error finding default storageClass : No default storage class found /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:819 ------------------------------ SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:58:43.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430 STEP: Creating the pod [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:43.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8796" for this suite. 
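The roughly 300-second "non-optional" cases in this run (ConfigMap, Projected configMap, and the Secrets spec above) mount a volume whose source object, or one of its keys, is deliberately missing while optional is left false (the default), so the kubelet can never populate the volume and the pod never starts; the specs then spend roughly the five-minute pod-start timeout confirming that. A sketch of the secret variant, with illustrative names.

    // Sketch only: a pod referencing a Secret that is never created, with
    // Optional=false, so the pod stays stuck before start.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        optional := false
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "missing-secret-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "app",
                    Image:        "busybox:1.28",
                    Command:      []string{"sh", "-c", "ls /etc/demo"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "creds", MountPath: "/etc/demo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "creds",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName: "does-not-exist", // never created
                            Optional:   &optional,        // false: volume setup keeps failing
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }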
• [SLOW TEST:300.059 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:430 ------------------------------ {"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]","total":-1,"completed":10,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:36.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-char-dev STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:256 STEP: Create a pod for further testing Nov 6 02:03:36.930: INFO: The status of Pod test-hostpath-type-hqmsv is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:03:38.932: INFO: The status of Pod test-hostpath-type-hqmsv is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:03:40.935: INFO: The status of Pod test-hostpath-type-hqmsv is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:03:42.936: INFO: The status of Pod test-hostpath-type-hqmsv is Running (Ready = true) STEP: running on node node2 STEP: Create a character device for further testing Nov 6 02:03:42.939: INFO: ExecWithOptions {Command:[/bin/sh -c mknod /mnt/test/achardev c 89 1] Namespace:host-path-type-char-dev-8545 PodName:test-hostpath-type-hqmsv ContainerName:host-path-testing Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:03:42.939: INFO: >>> kubeConfig: /root/.kube/config [It] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:45.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-char-dev-8545" for this suite. 
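The HostPathType specs in this run (socket, file vs. character device above, directory below) all hinge on the optional type field of a hostPath volume: when it is set, the kubelet checks that the host path exists and is of that kind before mounting, and otherwise rejects the mount with the error event the tests wait for. A minimal Go sketch with illustrative names; swap the constant for the other HostPathType values as needed.

    // Sketch only: hostPath volume with an explicit type check.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Other values include HostPathDirectory, HostPathFileOrCreate,
        // HostPathCharDev and HostPathDirectoryOrCreate.
        hpType := corev1.HostPathSocket
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "hostpath-type-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:         "host-path-testing",
                    Image:        "busybox:1.28",
                    Command:      []string{"sleep", "3600"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "sock", MountPath: "/mnt/sock"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "sock",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{
                            Path: "/var/run/does-not-exist-socket", // no such socket -> mount refused
                            Type: &hpType,
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }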
• [SLOW TEST:8.499 seconds] [sig-storage] HostPathType Character Device [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:271 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev","total":-1,"completed":17,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:45.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Nov 6 02:03:45.490: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:45.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-5567" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:82 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:261 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:35.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 6 02:03:35.682: INFO: The status of Pod test-hostpath-type-w9822 is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:03:37.685: INFO: The status of Pod test-hostpath-type-w9822 is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:03:39.686: INFO: The status of Pod test-hostpath-type-w9822 is Pending, waiting for it to be Running (with Ready = true) Nov 6 
02:03:41.684: INFO: The status of Pod test-hostpath-type-w9822 is Running (Ready = true) STEP: running on node node2 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:03:47.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-6847" for this suite. • [SLOW TEST:12.103 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:70 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory","total":-1,"completed":19,"skipped":721,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:21.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4b880311-fba6-4eb6-85b3-57c87c9ec487" Nov 6 02:03:25.490: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4b880311-fba6-4eb6-85b3-57c87c9ec487" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4b880311-fba6-4eb6-85b3-57c87c9ec487" "/tmp/local-volume-test-4b880311-fba6-4eb6-85b3-57c87c9ec487"] Namespace:persistent-local-volumes-test-908 PodName:hostexec-node1-x5bk5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:03:25.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 02:03:25.592: INFO: Creating a PV followed by a PVC Nov 6 02:03:25.598: INFO: Waiting for PV local-pv7fvsk to bind to PVC pvc-nkp2g Nov 6 02:03:25.598: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-nkp2g] to have phase Bound Nov 6 02:03:25.600: INFO: PersistentVolumeClaim pvc-nkp2g found but phase is Pending instead of Bound. Nov 6 02:03:27.605: INFO: PersistentVolumeClaim pvc-nkp2g found but phase is Pending instead of Bound. 
Nov 6 02:03:29.610: INFO: PersistentVolumeClaim pvc-nkp2g found but phase is Pending instead of Bound. Nov 6 02:03:31.614: INFO: PersistentVolumeClaim pvc-nkp2g found but phase is Pending instead of Bound. Nov 6 02:03:33.617: INFO: PersistentVolumeClaim pvc-nkp2g found but phase is Pending instead of Bound. Nov 6 02:03:35.619: INFO: PersistentVolumeClaim pvc-nkp2g found but phase is Pending instead of Bound. Nov 6 02:03:37.623: INFO: PersistentVolumeClaim pvc-nkp2g found and phase=Bound (12.025371558s) Nov 6 02:03:37.623: INFO: Waiting up to 3m0s for PersistentVolume local-pv7fvsk to have phase Bound Nov 6 02:03:37.626: INFO: PersistentVolume local-pv7fvsk found and phase=Bound (2.614137ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 STEP: Creating pod1 STEP: Creating a pod Nov 6 02:03:53.652: INFO: pod "pod-b243143a-e331-4018-9d70-4bc78aabc785" created on Node "node1" STEP: Writing in pod1 Nov 6 02:03:53.652: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-908 PodName:pod-b243143a-e331-4018-9d70-4bc78aabc785 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:03:53.652: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:03:53.738: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 02:03:53.738: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-908 PodName:pod-b243143a-e331-4018-9d70-4bc78aabc785 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:03:53.738: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:03:53.822: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-b243143a-e331-4018-9d70-4bc78aabc785 in namespace persistent-local-volumes-test-908 STEP: Creating pod2 STEP: Creating a pod Nov 6 02:03:59.853: INFO: pod "pod-5f360e5d-99fe-4387-87fb-6922b3bde680" created on Node "node1" STEP: Reading in pod2 Nov 6 02:03:59.853: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-908 PodName:pod-5f360e5d-99fe-4387-87fb-6922b3bde680 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:03:59.853: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:04:00.052: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Deleting pod2 STEP: Deleting pod pod-5f360e5d-99fe-4387-87fb-6922b3bde680 in namespace persistent-local-volumes-test-908 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 02:04:00.058: INFO: Deleting PersistentVolumeClaim "pvc-nkp2g" Nov 6 02:04:00.062: INFO: Deleting PersistentVolume "local-pv7fvsk" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4b880311-fba6-4eb6-85b3-57c87c9ec487" Nov 6 02:04:00.066: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4b880311-fba6-4eb6-85b3-57c87c9ec487"] 
Namespace:persistent-local-volumes-test-908 PodName:hostexec-node1-x5bk5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:04:00.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:04:00.183: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4b880311-fba6-4eb6-85b3-57c87c9ec487] Namespace:persistent-local-volumes-test-908 PodName:hostexec-node1-x5bk5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:04:00.183: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:00.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-908" for this suite. • [SLOW TEST:38.878 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume one after the other /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":19,"skipped":662,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:04:00.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename regional-pd STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:68 Nov 6 02:04:00.338: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:00.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "regional-pd-7777" for this suite. 
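The "Writing in pod1" / "Reading in pod2" steps in the local-volume specs above run shell commands inside the test pods over the exec subresource (the ExecWithOptions lines in the log). The same round trip can be reproduced with plain client-go; the namespace, pod and container names below are copied from the log for illustration, but this is only a sketch of the standard exec pattern, not the framework's own helper.

    // Sketch only: exec `cat /mnt/volume1/test-file` in a pod via client-go.
    package main

    import (
        "bytes"
        "fmt"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Build the exec request against the pod's "exec" subresource.
        req := clientset.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace("persistent-local-volumes-test-908").
            Name("pod-b243143a-e331-4018-9d70-4bc78aabc785").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "write-pod",
                Command:   []string{"/bin/sh", "-c", "cat /mnt/volume1/test-file"},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            panic(err)
        }
        var stdout, stderr bytes.Buffer
        if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
            panic(err)
        }
        fmt.Printf("out=%q stderr=%q\n", stdout.String(), stderr.String())
    }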
S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] [sig-storage] Regional PD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 RegionalPD [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:76 should provision storage in the allowedTopologies with delayed binding [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:90 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/regional_pd.go:72 ------------------------------ SSS ------------------------------ [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:04:00.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-socket STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:191 STEP: Create a pod for further testing Nov 6 02:04:00.397: INFO: The status of Pod test-hostpath-type-69pgn is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:04:02.400: INFO: The status of Pod test-hostpath-type-69pgn is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:04:04.402: INFO: The status of Pod test-hostpath-type-69pgn is Running (Ready = true) STEP: running on node node2 [It] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 [AfterEach] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:10.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-socket-2000" for this suite. 
• [SLOW TEST:10.082 seconds] [sig-storage] HostPathType Socket [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:208 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket","total":-1,"completed":20,"skipped":668,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:43.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0" Nov 6 02:03:53.846: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0" "/tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0"] Namespace:persistent-local-volumes-test-4808 PodName:hostexec-node1-v6pdk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:03:53.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 02:03:53.950: INFO: Creating a PV followed by a PVC Nov 6 02:03:53.956: INFO: Waiting for PV local-pvtrbtg to bind to PVC pvc-8h87c Nov 6 02:03:53.957: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-8h87c] to have phase Bound Nov 6 02:03:53.959: INFO: PersistentVolumeClaim pvc-8h87c found but phase is Pending instead of Bound. Nov 6 02:03:55.964: INFO: PersistentVolumeClaim pvc-8h87c found but phase is Pending instead of Bound. Nov 6 02:03:57.970: INFO: PersistentVolumeClaim pvc-8h87c found but phase is Pending instead of Bound. Nov 6 02:03:59.978: INFO: PersistentVolumeClaim pvc-8h87c found but phase is Pending instead of Bound. Nov 6 02:04:01.983: INFO: PersistentVolumeClaim pvc-8h87c found but phase is Pending instead of Bound. Nov 6 02:04:03.987: INFO: PersistentVolumeClaim pvc-8h87c found but phase is Pending instead of Bound. Nov 6 02:04:05.992: INFO: PersistentVolumeClaim pvc-8h87c found but phase is Pending instead of Bound. 
Nov 6 02:04:07.998: INFO: PersistentVolumeClaim pvc-8h87c found and phase=Bound (14.041559045s) Nov 6 02:04:07.998: INFO: Waiting up to 3m0s for PersistentVolume local-pvtrbtg to have phase Bound Nov 6 02:04:08.000: INFO: PersistentVolume local-pvtrbtg found and phase=Bound (1.819064ms) [It] should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 STEP: Creating pod1 to write to the PV STEP: Creating a pod Nov 6 02:04:12.026: INFO: pod "pod-3b01f61b-8b2c-4a78-b981-08c3cbaa8077" created on Node "node1" STEP: Writing in pod1 Nov 6 02:04:12.026: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4808 PodName:pod-3b01f61b-8b2c-4a78-b981-08c3cbaa8077 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:04:12.026: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:04:12.114: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo test-file-content > /mnt/volume1/test-file", out: "", stderr: "", err: Nov 6 02:04:12.114: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4808 PodName:pod-3b01f61b-8b2c-4a78-b981-08c3cbaa8077 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:04:12.114: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:04:12.201: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Creating pod2 to read from the PV STEP: Creating a pod Nov 6 02:04:16.228: INFO: pod "pod-c1e47782-f86c-469c-8e54-88caa31e39cd" created on Node "node1" Nov 6 02:04:16.228: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4808 PodName:pod-c1e47782-f86c-469c-8e54-88caa31e39cd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:04:16.228: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:04:16.315: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "test-file-content", stderr: "", err: STEP: Writing in pod2 Nov 6 02:04:16.315: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir -p /mnt/volume1; echo /tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0 > /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4808 PodName:pod-c1e47782-f86c-469c-8e54-88caa31e39cd ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:04:16.315: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:04:16.399: INFO: podRWCmdExec cmd: "mkdir -p /mnt/volume1; echo /tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0 > /mnt/volume1/test-file", out: "", stderr: "", err: STEP: Reading in pod1 Nov 6 02:04:16.399: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/volume1/test-file] Namespace:persistent-local-volumes-test-4808 PodName:pod-3b01f61b-8b2c-4a78-b981-08c3cbaa8077 ContainerName:write-pod Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 6 02:04:16.399: INFO: >>> kubeConfig: /root/.kube/config Nov 6 02:04:16.504: INFO: podRWCmdExec cmd: "cat /mnt/volume1/test-file", out: "/tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0", stderr: "", err: STEP: Deleting pod1 STEP: Deleting pod pod-3b01f61b-8b2c-4a78-b981-08c3cbaa8077 in 
namespace persistent-local-volumes-test-4808 STEP: Deleting pod2 STEP: Deleting pod pod-c1e47782-f86c-469c-8e54-88caa31e39cd in namespace persistent-local-volumes-test-4808 [AfterEach] [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV Nov 6 02:04:16.514: INFO: Deleting PersistentVolumeClaim "pvc-8h87c" Nov 6 02:04:16.517: INFO: Deleting PersistentVolume "local-pvtrbtg" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0" Nov 6 02:04:16.522: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0"] Namespace:persistent-local-volumes-test-4808 PodName:hostexec-node1-v6pdk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:04:16.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Nov 6 02:04:16.628: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-430a7e23-9627-4037-8d7a-e43d77d071a0] Namespace:persistent-local-volumes-test-4808 PodName:hostexec-node1-v6pdk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:04:16.628: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:16.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-4808" for this suite. 
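The long runs of "found but phase is Pending instead of Bound" in the local-volume specs above are a poll loop: the claim is re-read every couple of seconds until it reports phase Bound or a 3-minute timeout expires. A compact client-go sketch of that kind of wait; the claim and namespace names are copied from the log, and this is not the framework's own helper.

    // Sketch only: poll a PVC until it is Bound, mirroring the log's wait loop.
    package main

    import (
        "context"
        "fmt"
        "path/filepath"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        ns, claim := "persistent-local-volumes-test-4808", "pvc-8h87c" // from the log above
        err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
            pvc, err := clientset.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            if pvc.Status.Phase != corev1.ClaimBound {
                fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", claim, pvc.Status.Phase)
                return false, nil
            }
            return true, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("claim is Bound")
    }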
• [SLOW TEST:32.936 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: tmpfs] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192 Two pods mounting a local volume at the same time /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":493,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:32.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [It] should fail due to wrong node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324 STEP: Initializing test volumes Nov 6 01:59:34.179: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-9bd8dbd7-09ff-4de4-a5dc-794199600f8d] Namespace:persistent-local-volumes-test-9546 PodName:hostexec-node1-q24b9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 01:59:34.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating local PVCs and PVs Nov 6 01:59:34.375: INFO: Creating a PV followed by a PVC Nov 6 01:59:34.381: INFO: Waiting for PV local-pvg67l9 to bind to PVC pvc-4ncdx Nov 6 01:59:34.381: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4ncdx] to have phase Bound Nov 6 01:59:34.383: INFO: PersistentVolumeClaim pvc-4ncdx found but phase is Pending instead of Bound. 
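The "Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4ncdx] to have phase Bound" entries are a poll loop that re-reads the claim until its status phase flips from Pending to Bound. A minimal sketch of the same wait, assuming a 2s poll interval (the framework's actual interval may differ) and reusing the claim and namespace names from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the claim until it reports phase Bound, mirroring the
// "found but phase is Pending instead of Bound" lines in the log.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase == corev1.ClaimBound {
			return true, nil
		}
		fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
		return false, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPVCBound(cs, "persistent-local-volumes-test-9546", "pvc-4ncdx", 3*time.Minute); err != nil {
		panic(err)
	}
}

PollImmediate evaluates the condition once up front and then on every tick, which is consistent with the immediate "found but phase is Pending" entry logged at time zero.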
Nov 6 01:59:36.388: INFO: PersistentVolumeClaim pvc-4ncdx found and phase=Bound (2.00734441s) Nov 6 01:59:36.388: INFO: Waiting up to 3m0s for PersistentVolume local-pvg67l9 to have phase Bound Nov 6 01:59:36.391: INFO: PersistentVolume local-pvg67l9 found and phase=Bound (2.854956ms) STEP: Cleaning up PVC and PV Nov 6 02:04:36.417: INFO: Deleting PersistentVolumeClaim "pvc-4ncdx" Nov 6 02:04:36.422: INFO: Deleting PersistentVolume "local-pvg67l9" STEP: Removing the test directory Nov 6 02:04:36.426: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9bd8dbd7-09ff-4de4-a5dc-794199600f8d] Namespace:persistent-local-volumes-test-9546 PodName:hostexec-node1-q24b9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 6 02:04:36.426: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:36.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9546" for this suite. • [SLOW TEST:304.404 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Local volume that cannot be mounted [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304 should fail due to wrong node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:324 ------------------------------ {"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to wrong node","total":-1,"completed":8,"skipped":451,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:45.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 STEP: Building a driver namespace object, basename csi-mock-volumes-3206 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 02:03:45.641: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-attacher Nov 6 02:03:45.644: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3206 Nov 6 02:03:45.644: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3206 Nov 6 02:03:45.647: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3206 Nov 6 02:03:45.650: INFO: creating *v1.Role: csi-mock-volumes-3206-4202/external-attacher-cfg-csi-mock-volumes-3206 Nov 6 02:03:45.652: INFO: creating *v1.RoleBinding: csi-mock-volumes-3206-4202/csi-attacher-role-cfg Nov 6 02:03:45.655: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-provisioner Nov 6 02:03:45.657: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3206 Nov 6 02:03:45.657: 
INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3206 Nov 6 02:03:45.660: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3206 Nov 6 02:03:45.662: INFO: creating *v1.Role: csi-mock-volumes-3206-4202/external-provisioner-cfg-csi-mock-volumes-3206 Nov 6 02:03:45.665: INFO: creating *v1.RoleBinding: csi-mock-volumes-3206-4202/csi-provisioner-role-cfg Nov 6 02:03:45.667: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-resizer Nov 6 02:03:45.669: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3206 Nov 6 02:03:45.669: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3206 Nov 6 02:03:45.672: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3206 Nov 6 02:03:45.675: INFO: creating *v1.Role: csi-mock-volumes-3206-4202/external-resizer-cfg-csi-mock-volumes-3206 Nov 6 02:03:45.677: INFO: creating *v1.RoleBinding: csi-mock-volumes-3206-4202/csi-resizer-role-cfg Nov 6 02:03:45.680: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-snapshotter Nov 6 02:03:45.682: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3206 Nov 6 02:03:45.682: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3206 Nov 6 02:03:45.684: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3206 Nov 6 02:03:45.688: INFO: creating *v1.Role: csi-mock-volumes-3206-4202/external-snapshotter-leaderelection-csi-mock-volumes-3206 Nov 6 02:03:45.690: INFO: creating *v1.RoleBinding: csi-mock-volumes-3206-4202/external-snapshotter-leaderelection Nov 6 02:03:45.693: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-mock Nov 6 02:03:45.694: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3206 Nov 6 02:03:45.697: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3206 Nov 6 02:03:45.699: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3206 Nov 6 02:03:45.703: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3206 Nov 6 02:03:45.705: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3206 Nov 6 02:03:45.708: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3206 Nov 6 02:03:45.711: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3206 Nov 6 02:03:45.713: INFO: creating *v1.StatefulSet: csi-mock-volumes-3206-4202/csi-mockplugin Nov 6 02:03:45.719: INFO: creating *v1.StatefulSet: csi-mock-volumes-3206-4202/csi-mockplugin-attacher Nov 6 02:03:45.722: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3206 to register on node node2 STEP: Creating pod Nov 6 02:03:55.240: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 02:03:55.244: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-d65cf] to have phase Bound Nov 6 02:03:55.247: INFO: PersistentVolumeClaim pvc-d65cf found but phase is Pending instead of Bound. 
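Deploying the mock driver ends with "waiting for CSIDriver csi-mock-csi-mock-volumes-3206 to register on node node2". One place that registration becomes visible is the node's CSINode object, where kubelet lists every CSI plugin that has registered on it. The sketch below polls for that; it is an approximation of the check rather than the framework's own helper, with the driver and node names taken from the log:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// driverRegisteredOnNode reports whether the named CSI driver appears in the
// node's CSINode spec, i.e. whether kubelet has accepted its registration.
func driverRegisteredOnNode(cs kubernetes.Interface, driver, node string) (bool, error) {
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), node, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, d := range csiNode.Spec.Drivers {
		if d.Name == driver {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll until the mock driver from the log shows up on node2.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		return driverRegisteredOnNode(cs, "csi-mock-csi-mock-volumes-3206", "node2")
	})
	if err != nil {
		panic(err)
	}
}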
Nov 6 02:03:57.250: INFO: PersistentVolumeClaim pvc-d65cf found and phase=Bound (2.005274316s) STEP: Deleting the previously created pod Nov 6 02:04:11.276: INFO: Deleting pod "pvc-volume-tester-8m2tx" in namespace "csi-mock-volumes-3206" Nov 6 02:04:11.282: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8m2tx" to be fully deleted STEP: Checking CSI driver logs Nov 6 02:04:19.404: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/ef38f33e-2184-4229-b0f7-5f43fa1fc23e/volumes/kubernetes.io~csi/pvc-522cb639-3e65-4a5f-a151-f44969a1027f/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} STEP: Deleting pod pvc-volume-tester-8m2tx Nov 6 02:04:19.404: INFO: Deleting pod "pvc-volume-tester-8m2tx" in namespace "csi-mock-volumes-3206" STEP: Deleting claim pvc-d65cf Nov 6 02:04:19.413: INFO: Waiting up to 2m0s for PersistentVolume pvc-522cb639-3e65-4a5f-a151-f44969a1027f to get deleted Nov 6 02:04:19.415: INFO: PersistentVolume pvc-522cb639-3e65-4a5f-a151-f44969a1027f found and phase=Bound (1.918385ms) Nov 6 02:04:21.419: INFO: PersistentVolume pvc-522cb639-3e65-4a5f-a151-f44969a1027f found and phase=Released (2.005937372s) Nov 6 02:04:23.423: INFO: PersistentVolume pvc-522cb639-3e65-4a5f-a151-f44969a1027f found and phase=Released (4.009499417s) Nov 6 02:04:25.427: INFO: PersistentVolume pvc-522cb639-3e65-4a5f-a151-f44969a1027f found and phase=Released (6.013424677s) Nov 6 02:04:27.430: INFO: PersistentVolume pvc-522cb639-3e65-4a5f-a151-f44969a1027f was removed STEP: Deleting storageclass csi-mock-volumes-3206-scwtd87 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3206 STEP: Waiting for namespaces [csi-mock-volumes-3206] to vanish STEP: uninstalling csi mock driver Nov 6 02:04:33.443: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-attacher Nov 6 02:04:33.447: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3206 Nov 6 02:04:33.451: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3206 Nov 6 02:04:33.455: INFO: deleting *v1.Role: csi-mock-volumes-3206-4202/external-attacher-cfg-csi-mock-volumes-3206 Nov 6 02:04:33.459: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3206-4202/csi-attacher-role-cfg Nov 6 02:04:33.463: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-provisioner Nov 6 02:04:33.466: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3206 Nov 6 02:04:33.471: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3206 Nov 6 02:04:33.478: INFO: deleting *v1.Role: csi-mock-volumes-3206-4202/external-provisioner-cfg-csi-mock-volumes-3206 Nov 6 02:04:33.484: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3206-4202/csi-provisioner-role-cfg Nov 6 02:04:33.491: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-resizer Nov 6 02:04:33.496: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3206 Nov 6 02:04:33.499: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3206 Nov 6 02:04:33.503: INFO: deleting *v1.Role: csi-mock-volumes-3206-4202/external-resizer-cfg-csi-mock-volumes-3206 Nov 6 02:04:33.506: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3206-4202/csi-resizer-role-cfg Nov 6 02:04:33.509: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-snapshotter Nov 6 
02:04:33.512: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3206 Nov 6 02:04:33.516: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3206 Nov 6 02:04:33.519: INFO: deleting *v1.Role: csi-mock-volumes-3206-4202/external-snapshotter-leaderelection-csi-mock-volumes-3206 Nov 6 02:04:33.522: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3206-4202/external-snapshotter-leaderelection Nov 6 02:04:33.525: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3206-4202/csi-mock Nov 6 02:04:33.528: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3206 Nov 6 02:04:33.533: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3206 Nov 6 02:04:33.536: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3206 Nov 6 02:04:33.539: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3206 Nov 6 02:04:33.543: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3206 Nov 6 02:04:33.546: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3206 Nov 6 02:04:33.549: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3206 Nov 6 02:04:33.552: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3206-4202/csi-mockplugin Nov 6 02:04:33.555: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3206-4202/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-3206-4202 STEP: Waiting for namespaces [csi-mock-volumes-3206-4202] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:39.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:54.043 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI workload information using mock driver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443 should not be passed when CSIDriver does not exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":18,"skipped":470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:04:39.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv-protection STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:51 Nov 6 02:04:39.807: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable STEP: Creating a PV STEP: Waiting for PV to enter phase Available Nov 6 02:04:39.814: INFO: Waiting up to 30s for PersistentVolume hostpath-9p8vt to have phase Available Nov 6 
02:04:39.816: INFO: PersistentVolume hostpath-9p8vt found and phase=Available (2.084675ms) STEP: Checking that PV Protection finalizer is set [It] Verify that PV bound to a PVC is not removed immediately /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:107 STEP: Creating a PVC STEP: Waiting for PVC to become Bound Nov 6 02:04:39.823: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vd45r] to have phase Bound Nov 6 02:04:39.825: INFO: PersistentVolumeClaim pvc-vd45r found but phase is Pending instead of Bound. Nov 6 02:04:41.829: INFO: PersistentVolumeClaim pvc-vd45r found and phase=Bound (2.006089446s) STEP: Deleting the PV, however, the PV must not be removed from the system as it's bound to a PVC STEP: Checking that the PV status is Terminating STEP: Deleting the PVC that is bound to the PV STEP: Checking that the PV is automatically removed from the system because it's no longer bound to a PVC Nov 6 02:04:41.839: INFO: Waiting up to 3m0s for PersistentVolume hostpath-9p8vt to get deleted Nov 6 02:04:41.841: INFO: PersistentVolume hostpath-9p8vt found and phase=Bound (1.981254ms) Nov 6 02:04:43.845: INFO: PersistentVolume hostpath-9p8vt was removed [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:43.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-protection-4598" for this suite. [AfterEach] [sig-storage] PV Protection /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pv_protection.go:92 Nov 6 02:04:43.853: INFO: AfterEach: Cleaning up test resources. Nov 6 02:04:43.853: INFO: Deleting PersistentVolumeClaim "pvc-vd45r" Nov 6 02:04:43.855: INFO: Deleting PersistentVolume "hostpath-9p8vt" • ------------------------------ {"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":19,"skipped":578,"failed":0} SSS ------------------------------ Nov 6 02:04:43.866: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:47.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-3677 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 02:03:47.822: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-attacher Nov 6 02:03:47.825: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3677 Nov 6 02:03:47.825: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3677 Nov 6 02:03:47.827: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3677 Nov 6 02:03:47.831: INFO: creating *v1.Role: csi-mock-volumes-3677-1938/external-attacher-cfg-csi-mock-volumes-3677 Nov 6 02:03:47.833: INFO: creating *v1.RoleBinding: 
csi-mock-volumes-3677-1938/csi-attacher-role-cfg Nov 6 02:03:47.836: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-provisioner Nov 6 02:03:47.838: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3677 Nov 6 02:03:47.838: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3677 Nov 6 02:03:47.842: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3677 Nov 6 02:03:47.844: INFO: creating *v1.Role: csi-mock-volumes-3677-1938/external-provisioner-cfg-csi-mock-volumes-3677 Nov 6 02:03:47.847: INFO: creating *v1.RoleBinding: csi-mock-volumes-3677-1938/csi-provisioner-role-cfg Nov 6 02:03:47.850: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-resizer Nov 6 02:03:47.852: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3677 Nov 6 02:03:47.852: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3677 Nov 6 02:03:47.855: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3677 Nov 6 02:03:47.858: INFO: creating *v1.Role: csi-mock-volumes-3677-1938/external-resizer-cfg-csi-mock-volumes-3677 Nov 6 02:03:47.861: INFO: creating *v1.RoleBinding: csi-mock-volumes-3677-1938/csi-resizer-role-cfg Nov 6 02:03:47.863: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-snapshotter Nov 6 02:03:47.866: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3677 Nov 6 02:03:47.866: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3677 Nov 6 02:03:47.868: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3677 Nov 6 02:03:47.871: INFO: creating *v1.Role: csi-mock-volumes-3677-1938/external-snapshotter-leaderelection-csi-mock-volumes-3677 Nov 6 02:03:47.873: INFO: creating *v1.RoleBinding: csi-mock-volumes-3677-1938/external-snapshotter-leaderelection Nov 6 02:03:47.875: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-mock Nov 6 02:03:47.878: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3677 Nov 6 02:03:47.880: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3677 Nov 6 02:03:47.883: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3677 Nov 6 02:03:47.886: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3677 Nov 6 02:03:47.888: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3677 Nov 6 02:03:47.891: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3677 Nov 6 02:03:47.894: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3677 Nov 6 02:03:47.897: INFO: creating *v1.StatefulSet: csi-mock-volumes-3677-1938/csi-mockplugin Nov 6 02:03:47.901: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3677 Nov 6 02:03:47.904: INFO: creating *v1.StatefulSet: csi-mock-volumes-3677-1938/csi-mockplugin-resizer Nov 6 02:03:47.907: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3677" Nov 6 02:03:47.910: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3677 to register on node node2 STEP: Creating pod Nov 6 02:03:57.428: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 02:03:57.433: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-g2hkh] to have phase Bound Nov 6 02:03:57.435: INFO: PersistentVolumeClaim pvc-g2hkh 
found but phase is Pending instead of Bound. Nov 6 02:03:59.441: INFO: PersistentVolumeClaim pvc-g2hkh found and phase=Bound (2.008547507s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-8qrdg Nov 6 02:04:05.483: INFO: Deleting pod "pvc-volume-tester-8qrdg" in namespace "csi-mock-volumes-3677" Nov 6 02:04:05.487: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8qrdg" to be fully deleted STEP: Deleting claim pvc-g2hkh Nov 6 02:04:09.504: INFO: Waiting up to 2m0s for PersistentVolume pvc-0d0638c8-08ff-437d-9437-a4ea06a1db16 to get deleted Nov 6 02:04:09.507: INFO: PersistentVolume pvc-0d0638c8-08ff-437d-9437-a4ea06a1db16 found and phase=Bound (2.308118ms) Nov 6 02:04:11.515: INFO: PersistentVolume pvc-0d0638c8-08ff-437d-9437-a4ea06a1db16 was removed STEP: Deleting storageclass csi-mock-volumes-3677-sc5rspc STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-3677 STEP: Waiting for namespaces [csi-mock-volumes-3677] to vanish STEP: uninstalling csi mock driver Nov 6 02:04:17.528: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-attacher Nov 6 02:04:17.534: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3677 Nov 6 02:04:17.537: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3677 Nov 6 02:04:17.540: INFO: deleting *v1.Role: csi-mock-volumes-3677-1938/external-attacher-cfg-csi-mock-volumes-3677 Nov 6 02:04:17.544: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3677-1938/csi-attacher-role-cfg Nov 6 02:04:17.547: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-provisioner Nov 6 02:04:17.551: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3677 Nov 6 02:04:17.554: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3677 Nov 6 02:04:17.562: INFO: deleting *v1.Role: csi-mock-volumes-3677-1938/external-provisioner-cfg-csi-mock-volumes-3677 Nov 6 02:04:17.568: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3677-1938/csi-provisioner-role-cfg Nov 6 02:04:17.577: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-resizer Nov 6 02:04:17.582: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3677 Nov 6 02:04:17.587: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3677 Nov 6 02:04:17.591: INFO: deleting *v1.Role: csi-mock-volumes-3677-1938/external-resizer-cfg-csi-mock-volumes-3677 Nov 6 02:04:17.595: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3677-1938/csi-resizer-role-cfg Nov 6 02:04:17.599: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-snapshotter Nov 6 02:04:17.603: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3677 Nov 6 02:04:17.606: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3677 Nov 6 02:04:17.609: INFO: deleting *v1.Role: csi-mock-volumes-3677-1938/external-snapshotter-leaderelection-csi-mock-volumes-3677 Nov 6 02:04:17.612: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3677-1938/external-snapshotter-leaderelection Nov 6 02:04:17.615: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3677-1938/csi-mock Nov 6 02:04:17.618: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3677 Nov 6 02:04:17.622: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3677 Nov 6 02:04:17.625: INFO: deleting 
*v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3677 Nov 6 02:04:17.628: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3677 Nov 6 02:04:17.631: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3677 Nov 6 02:04:17.634: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3677 Nov 6 02:04:17.637: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3677 Nov 6 02:04:17.640: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3677-1938/csi-mockplugin Nov 6 02:04:17.643: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-3677 Nov 6 02:04:17.647: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3677-1938/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-3677-1938 STEP: Waiting for namespaces [csi-mock-volumes-3677-1938] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:45.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:57.907 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=off, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":20,"skipped":724,"failed":0} Nov 6 02:04:45.667: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:04:36.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename host-path-type-directory STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:57 STEP: Create a pod for further testing Nov 6 02:04:36.605: INFO: The status of Pod test-hostpath-type-dl4hb is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:04:38.611: INFO: The status of Pod test-hostpath-type-dl4hb is Pending, waiting for it to be Running (with Ready = true) Nov 6 02:04:40.610: INFO: The status of Pod test-hostpath-type-dl4hb is Running (Ready = true) STEP: running on node node1 STEP: Should automatically create a new directory 'adir' when HostPathType is HostPathDirectoryOrCreate [It] Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 STEP: Creating pod STEP: Checking for HostPathType error event [AfterEach] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:46.660: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "host-path-type-directory-4286" for this suite. • [SLOW TEST:10.103 seconds] [sig-storage] HostPathType Directory [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Should fail on mounting directory 'adir' when HostPathType is HostPathFile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/host_path_type.go:84 ------------------------------ {"msg":"PASSED [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile","total":-1,"completed":9,"skipped":461,"failed":0} Nov 6 02:04:46.668: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:59:46.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469 STEP: Creating configMap with name cm-test-opt-create-384de3fe-9810-41f2-a89e-e9da2cc75059 STEP: Creating the pod [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:04:46.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-536" for this suite. 
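The Projected configMap spec that just finished creates a pod whose projected volume maps a key that is absent from the configMap, with Optional set to false; kubelet can therefore never populate the volume, and the pod never comes up, which is exactly what the test waits out. A minimal sketch of such a volume definition follows; only the configMap name comes from the log, while the key and path names are made up for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := false

	// Projected volume referencing a key that the configMap does not contain.
	// With Optional=false the kubelet treats the missing key as an error and
	// cannot mount the volume, so the pod stays un-runnable.
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "cm-test-opt-create-384de3fe-9810-41f2-a89e-e9da2cc75059",
							},
							Items: []corev1.KeyToPath{
								// "no-such-key" is a placeholder for a key absent from the configMap.
								{Key: "no-such-key", Path: "missing"},
							},
							Optional: &optional,
						},
					},
				},
			},
		},
	}
	fmt.Println(vol.Name)
}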
• [SLOW TEST:300.068 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:469 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]","total":-1,"completed":12,"skipped":274,"failed":0} Nov 6 02:04:46.923: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:04:16.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 STEP: Building a driver namespace object, basename csi-mock-volumes-5549 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 02:04:16.840: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-attacher Nov 6 02:04:16.843: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5549 Nov 6 02:04:16.843: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5549 Nov 6 02:04:16.847: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5549 Nov 6 02:04:16.849: INFO: creating *v1.Role: csi-mock-volumes-5549-6024/external-attacher-cfg-csi-mock-volumes-5549 Nov 6 02:04:16.852: INFO: creating *v1.RoleBinding: csi-mock-volumes-5549-6024/csi-attacher-role-cfg Nov 6 02:04:16.854: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-provisioner Nov 6 02:04:16.856: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5549 Nov 6 02:04:16.856: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5549 Nov 6 02:04:16.860: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5549 Nov 6 02:04:16.863: INFO: creating *v1.Role: csi-mock-volumes-5549-6024/external-provisioner-cfg-csi-mock-volumes-5549 Nov 6 02:04:16.865: INFO: creating *v1.RoleBinding: csi-mock-volumes-5549-6024/csi-provisioner-role-cfg Nov 6 02:04:16.868: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-resizer Nov 6 02:04:16.870: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5549 Nov 6 02:04:16.870: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5549 Nov 6 02:04:16.873: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5549 Nov 6 02:04:16.876: INFO: creating *v1.Role: csi-mock-volumes-5549-6024/external-resizer-cfg-csi-mock-volumes-5549 Nov 6 02:04:16.879: INFO: creating *v1.RoleBinding: csi-mock-volumes-5549-6024/csi-resizer-role-cfg Nov 6 02:04:16.881: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-snapshotter Nov 6 02:04:16.884: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5549 Nov 6 02:04:16.884: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5549 Nov 6 
02:04:16.887: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5549 Nov 6 02:04:16.890: INFO: creating *v1.Role: csi-mock-volumes-5549-6024/external-snapshotter-leaderelection-csi-mock-volumes-5549 Nov 6 02:04:16.892: INFO: creating *v1.RoleBinding: csi-mock-volumes-5549-6024/external-snapshotter-leaderelection Nov 6 02:04:16.895: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-mock Nov 6 02:04:16.898: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5549 Nov 6 02:04:16.900: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5549 Nov 6 02:04:16.902: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5549 Nov 6 02:04:16.905: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5549 Nov 6 02:04:16.908: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5549 Nov 6 02:04:16.911: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5549 Nov 6 02:04:16.914: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5549 Nov 6 02:04:16.916: INFO: creating *v1.StatefulSet: csi-mock-volumes-5549-6024/csi-mockplugin Nov 6 02:04:16.921: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5549 Nov 6 02:04:16.924: INFO: creating *v1.StatefulSet: csi-mock-volumes-5549-6024/csi-mockplugin-attacher Nov 6 02:04:16.927: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5549" Nov 6 02:04:16.929: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5549 to register on node node2 STEP: Creating pod Nov 6 02:04:31.450: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil STEP: Deleting the previously created pod Nov 6 02:04:39.477: INFO: Deleting pod "pvc-volume-tester-8psc9" in namespace "csi-mock-volumes-5549" Nov 6 02:04:39.481: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8psc9" to be fully deleted STEP: Deleting pod pvc-volume-tester-8psc9 Nov 6 02:04:43.487: INFO: Deleting pod "pvc-volume-tester-8psc9" in namespace "csi-mock-volumes-5549" STEP: Deleting claim pvc-l4kkf Nov 6 02:04:43.495: INFO: Waiting up to 2m0s for PersistentVolume pvc-50ccad74-538f-4f9d-a678-c3427338ed9c to get deleted Nov 6 02:04:43.497: INFO: PersistentVolume pvc-50ccad74-538f-4f9d-a678-c3427338ed9c found and phase=Bound (2.241772ms) Nov 6 02:04:45.501: INFO: PersistentVolume pvc-50ccad74-538f-4f9d-a678-c3427338ed9c was removed STEP: Deleting storageclass mock-csi-storage-capacity-csi-mock-volumes-5549 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-5549 STEP: Waiting for namespaces [csi-mock-volumes-5549] to vanish STEP: uninstalling csi mock driver Nov 6 02:04:51.515: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-attacher Nov 6 02:04:51.520: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5549 Nov 6 02:04:51.524: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5549 Nov 6 02:04:51.527: INFO: deleting *v1.Role: csi-mock-volumes-5549-6024/external-attacher-cfg-csi-mock-volumes-5549 Nov 6 02:04:51.532: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5549-6024/csi-attacher-role-cfg Nov 6 02:04:51.536: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-provisioner Nov 6 02:04:51.540: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5549 Nov 6 
02:04:51.544: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5549 Nov 6 02:04:51.550: INFO: deleting *v1.Role: csi-mock-volumes-5549-6024/external-provisioner-cfg-csi-mock-volumes-5549 Nov 6 02:04:51.556: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5549-6024/csi-provisioner-role-cfg Nov 6 02:04:51.562: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-resizer Nov 6 02:04:51.568: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5549 Nov 6 02:04:51.572: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5549 Nov 6 02:04:51.576: INFO: deleting *v1.Role: csi-mock-volumes-5549-6024/external-resizer-cfg-csi-mock-volumes-5549 Nov 6 02:04:51.579: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5549-6024/csi-resizer-role-cfg Nov 6 02:04:51.583: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-snapshotter Nov 6 02:04:51.587: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5549 Nov 6 02:04:51.590: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5549 Nov 6 02:04:51.594: INFO: deleting *v1.Role: csi-mock-volumes-5549-6024/external-snapshotter-leaderelection-csi-mock-volumes-5549 Nov 6 02:04:51.597: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5549-6024/external-snapshotter-leaderelection Nov 6 02:04:51.601: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5549-6024/csi-mock Nov 6 02:04:51.604: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5549 Nov 6 02:04:51.607: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5549 Nov 6 02:04:51.610: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5549 Nov 6 02:04:51.613: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5549 Nov 6 02:04:51.616: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5549 Nov 6 02:04:51.619: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5549 Nov 6 02:04:51.624: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5549 Nov 6 02:04:51.627: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5549-6024/csi-mockplugin Nov 6 02:04:51.631: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5549 Nov 6 02:04:51.635: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5549-6024/csi-mockplugin-attacher STEP: deleting the driver namespace: csi-mock-volumes-5549-6024 STEP: Waiting for namespaces [csi-mock-volumes-5549-6024] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:05:03.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:46.872 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSIStorageCapacity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134 CSIStorageCapacity disabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177 ------------------------------ [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a 
kubernetes client Nov 6 02:04:10.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 STEP: Building a driver namespace object, basename csi-mock-volumes-4052 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock proxy Nov 6 02:04:10.544: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4052-6049/csi-attacher Nov 6 02:04:10.547: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4052 Nov 6 02:04:10.547: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-4052 Nov 6 02:04:10.550: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4052 Nov 6 02:04:10.552: INFO: creating *v1.Role: csi-mock-volumes-4052-6049/external-attacher-cfg-csi-mock-volumes-4052 Nov 6 02:04:10.555: INFO: creating *v1.RoleBinding: csi-mock-volumes-4052-6049/csi-attacher-role-cfg Nov 6 02:04:10.558: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4052-6049/csi-provisioner Nov 6 02:04:10.560: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4052 Nov 6 02:04:10.560: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-4052 Nov 6 02:04:10.566: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4052 Nov 6 02:04:10.568: INFO: creating *v1.Role: csi-mock-volumes-4052-6049/external-provisioner-cfg-csi-mock-volumes-4052 Nov 6 02:04:10.575: INFO: creating *v1.RoleBinding: csi-mock-volumes-4052-6049/csi-provisioner-role-cfg Nov 6 02:04:10.580: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4052-6049/csi-resizer Nov 6 02:04:10.586: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4052 Nov 6 02:04:10.586: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-4052 Nov 6 02:04:10.588: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4052 Nov 6 02:04:10.591: INFO: creating *v1.Role: csi-mock-volumes-4052-6049/external-resizer-cfg-csi-mock-volumes-4052 Nov 6 02:04:10.594: INFO: creating *v1.RoleBinding: csi-mock-volumes-4052-6049/csi-resizer-role-cfg Nov 6 02:04:10.597: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4052-6049/csi-snapshotter Nov 6 02:04:10.599: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4052 Nov 6 02:04:10.599: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-4052 Nov 6 02:04:10.602: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4052 Nov 6 02:04:10.605: INFO: creating *v1.Role: csi-mock-volumes-4052-6049/external-snapshotter-leaderelection-csi-mock-volumes-4052 Nov 6 02:04:10.607: INFO: creating *v1.RoleBinding: csi-mock-volumes-4052-6049/external-snapshotter-leaderelection Nov 6 02:04:10.610: INFO: creating *v1.ServiceAccount: csi-mock-volumes-4052-6049/csi-mock Nov 6 02:04:10.612: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4052 Nov 6 02:04:10.615: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4052 Nov 6 02:04:10.618: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4052 Nov 6 02:04:10.620: INFO: creating *v1.ClusterRoleBinding: 
psp-csi-controller-driver-registrar-role-csi-mock-volumes-4052 Nov 6 02:04:10.623: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4052 Nov 6 02:04:10.626: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4052 Nov 6 02:04:10.628: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4052 Nov 6 02:04:10.631: INFO: creating *v1.StatefulSet: csi-mock-volumes-4052-6049/csi-mockplugin Nov 6 02:04:10.635: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4052 Nov 6 02:04:10.637: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4052" Nov 6 02:04:10.639: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4052 to register on node node2 I1106 02:04:15.712106 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4052","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 02:04:15.794361 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I1106 02:04:15.834663 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4052","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I1106 02:04:15.883291 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I1106 02:04:15.885915 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I1106 02:04:15.993252 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4052"},"Error":"","FullError":null} STEP: Creating pod Nov 6 02:04:20.155: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 02:04:20.161: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-c9mxb] to have phase Bound Nov 6 02:04:20.163: INFO: PersistentVolumeClaim pvc-c9mxb found but phase is Pending instead of Bound. 
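The gRPCCall entries above show the mock driver answering every NodeStageVolume with gRPC code 3 (InvalidArgument, "fake error"); the spec then asserts that kubelet keeps retrying NodeStage but never issues NodeUnstage for a volume that was never successfully staged. Below is a rough sketch of a CSI node-plugin fragment that produces this behaviour; it implements only the two RPCs of interest rather than the full csi.NodeServer interface and is not the real mock driver, whose gRPC server wiring is omitted here:

package main

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// faultyNodeServer is a fragment of a CSI node plugin used to simulate a
// final staging failure, like the mock driver's "fake error" in the log.
type faultyNodeServer struct{}

// NodeStageVolume always fails with InvalidArgument (gRPC code 3), matching
// the {"code":3,"message":"fake error"} responses recorded above.
func (s *faultyNodeServer) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
	return nil, status.Error(codes.InvalidArgument, "fake error")
}

// NodeUnstageVolume should never be reached in this scenario; failing loudly
// here makes an unexpected call easy to spot in a test.
func (s *faultyNodeServer) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnstageVolumeRequest) (*csi.NodeUnstageVolumeResponse, error) {
	return nil, status.Error(codes.FailedPrecondition, "NodeUnstageVolume must not be called after a final NodeStage error")
}

func main() {
	_ = &faultyNodeServer{} // would be registered with a gRPC server in a real driver
}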
I1106 02:04:20.250556 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28"}}},"Error":"","FullError":null} Nov 6 02:04:22.166: INFO: PersistentVolumeClaim pvc-c9mxb found and phase=Bound (2.005602806s) Nov 6 02:04:22.180: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-c9mxb] to have phase Bound Nov 6 02:04:22.185: INFO: PersistentVolumeClaim pvc-c9mxb found and phase=Bound (5.134636ms) STEP: Waiting for expected CSI calls I1106 02:04:22.423194 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 02:04:22.427604 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28","storage.kubernetes.io/csiProvisionerIdentity":"1636164255921-8081-csi-mock-csi-mock-volumes-4052"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1106 02:04:22.941634 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 02:04:22.944519 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28","storage.kubernetes.io/csiProvisionerIdentity":"1636164255921-8081-csi-mock-csi-mock-volumes-4052"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Deleting the previously created pod Nov 6 02:04:23.185: INFO: Deleting pod "pvc-volume-tester-hr77v" in namespace "csi-mock-volumes-4052" Nov 6 02:04:23.190: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hr77v" to be fully deleted I1106 02:04:24.041322 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 02:04:24.043689 34 csi.go:431] gRPCCall: 
{"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28","storage.kubernetes.io/csiProvisionerIdentity":"1636164255921-8081-csi-mock-csi-mock-volumes-4052"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} I1106 02:04:26.064340 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I1106 02:04:26.066766 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28","storage.kubernetes.io/csiProvisionerIdentity":"1636164255921-8081-csi-mock-csi-mock-volumes-4052"}},"Response":null,"Error":"rpc error: code = InvalidArgument desc = fake error","FullError":{"code":3,"message":"fake error"}} STEP: Waiting for all remaining expected CSI calls STEP: Deleting pod pvc-volume-tester-hr77v Nov 6 02:04:30.197: INFO: Deleting pod "pvc-volume-tester-hr77v" in namespace "csi-mock-volumes-4052" STEP: Deleting claim pvc-c9mxb Nov 6 02:04:30.208: INFO: Waiting up to 2m0s for PersistentVolume pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28 to get deleted Nov 6 02:04:30.211: INFO: PersistentVolume pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28 found and phase=Bound (3.307879ms) I1106 02:04:30.225989 34 csi.go:431] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Nov 6 02:04:32.216: INFO: PersistentVolume pvc-bfd4f269-bd85-4dc3-9011-42ee9ee23f28 was removed STEP: Deleting storageclass csi-mock-volumes-4052-scxc9m4 STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-4052 STEP: Waiting for namespaces [csi-mock-volumes-4052] to vanish STEP: uninstalling csi mock driver Nov 6 02:04:38.247: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4052-6049/csi-attacher Nov 6 02:04:38.250: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-4052 Nov 6 02:04:38.254: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-4052 Nov 6 02:04:38.258: INFO: deleting *v1.Role: csi-mock-volumes-4052-6049/external-attacher-cfg-csi-mock-volumes-4052 Nov 6 02:04:38.262: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4052-6049/csi-attacher-role-cfg Nov 6 02:04:38.266: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4052-6049/csi-provisioner Nov 6 02:04:38.270: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-4052 Nov 6 02:04:38.273: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-4052 Nov 6 02:04:38.276: INFO: deleting *v1.Role: csi-mock-volumes-4052-6049/external-provisioner-cfg-csi-mock-volumes-4052 Nov 6 02:04:38.281: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4052-6049/csi-provisioner-role-cfg Nov 6 02:04:38.285: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-4052-6049/csi-resizer Nov 6 02:04:38.288: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-4052 Nov 6 02:04:38.293: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-4052 Nov 6 02:04:38.296: INFO: deleting *v1.Role: csi-mock-volumes-4052-6049/external-resizer-cfg-csi-mock-volumes-4052 Nov 6 02:04:38.300: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4052-6049/csi-resizer-role-cfg Nov 6 02:04:38.303: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4052-6049/csi-snapshotter Nov 6 02:04:38.307: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-4052 Nov 6 02:04:38.311: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-4052 Nov 6 02:04:38.314: INFO: deleting *v1.Role: csi-mock-volumes-4052-6049/external-snapshotter-leaderelection-csi-mock-volumes-4052 Nov 6 02:04:38.318: INFO: deleting *v1.RoleBinding: csi-mock-volumes-4052-6049/external-snapshotter-leaderelection Nov 6 02:04:38.321: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-4052-6049/csi-mock Nov 6 02:04:38.324: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-4052 Nov 6 02:04:38.328: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-4052 Nov 6 02:04:38.331: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-4052 Nov 6 02:04:38.335: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-4052 Nov 6 02:04:38.338: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-4052 Nov 6 02:04:38.342: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4052 Nov 6 02:04:38.346: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4052 Nov 6 02:04:38.349: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4052-6049/csi-mockplugin Nov 6 02:04:38.353: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4052 STEP: deleting the driver namespace: csi-mock-volumes-4052-6049 STEP: Waiting for namespaces [csi-mock-volumes-4052-6049] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:05:22.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:71.890 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI NodeStage error cases [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:734 should not call NodeUnstage after NodeStage final error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:828 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error","total":-1,"completed":21,"skipped":683,"failed":0} Nov 6 02:05:22.376: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:28.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned 
in namespace [It] should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 STEP: Create configmap STEP: Creating pod pod-subpath-test-configmap-hg5v STEP: Failing liveness probe Nov 6 02:03:38.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=subpath-1566 exec pod-subpath-test-configmap-hg5v --container test-container-volume-configmap-hg5v -- /bin/sh -c rm /probe-volume/probe-file' Nov 6 02:03:38.388: INFO: stderr: "" Nov 6 02:03:38.388: INFO: stdout: "" Nov 6 02:03:38.388: INFO: Pod exec output: STEP: Waiting for container to restart Nov 6 02:03:38.391: INFO: Container test-container-subpath-configmap-hg5v, restarts: 0 Nov 6 02:03:48.394: INFO: Container test-container-subpath-configmap-hg5v, restarts: 1 Nov 6 02:03:48.394: INFO: Container has restart count: 1 STEP: Fix liveness probe STEP: Waiting for container to stop restarting Nov 6 02:03:52.404: INFO: Container has restart count: 2 Nov 6 02:04:10.406: INFO: Container has restart count: 3 Nov 6 02:05:12.404: INFO: Container restart has stabilized Nov 6 02:05:12.404: INFO: Deleting pod "pod-subpath-test-configmap-hg5v" in namespace "subpath-1566" Nov 6 02:05:12.409: INFO: Wait up to 5m0s for pod "pod-subpath-test-configmap-hg5v" to be fully deleted [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:05:30.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1566" for this suite. • [SLOW TEST:122.383 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Container restart /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130 should verify that container can restart successfully after configmaps modified /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":14,"skipped":681,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]"]} Nov 6 02:05:30.433: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:28.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes STEP: Waiting for a default service account to be provisioned in namespace [It] should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 STEP: Building a driver namespace object, basename csi-mock-volumes-2298 STEP: Waiting for a default service account to be provisioned in namespace STEP: deploying csi mock driver Nov 6 02:03:28.814: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-attacher Nov 6 02:03:28.816: INFO: creating *v1.ClusterRole: 
external-attacher-runner-csi-mock-volumes-2298 Nov 6 02:03:28.816: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-2298 Nov 6 02:03:28.818: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2298 Nov 6 02:03:28.821: INFO: creating *v1.Role: csi-mock-volumes-2298-6371/external-attacher-cfg-csi-mock-volumes-2298 Nov 6 02:03:28.824: INFO: creating *v1.RoleBinding: csi-mock-volumes-2298-6371/csi-attacher-role-cfg Nov 6 02:03:28.827: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-provisioner Nov 6 02:03:28.829: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2298 Nov 6 02:03:28.829: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-2298 Nov 6 02:03:28.832: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2298 Nov 6 02:03:28.835: INFO: creating *v1.Role: csi-mock-volumes-2298-6371/external-provisioner-cfg-csi-mock-volumes-2298 Nov 6 02:03:28.838: INFO: creating *v1.RoleBinding: csi-mock-volumes-2298-6371/csi-provisioner-role-cfg Nov 6 02:03:28.841: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-resizer Nov 6 02:03:28.843: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2298 Nov 6 02:03:28.844: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-2298 Nov 6 02:03:28.846: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2298 Nov 6 02:03:28.849: INFO: creating *v1.Role: csi-mock-volumes-2298-6371/external-resizer-cfg-csi-mock-volumes-2298 Nov 6 02:03:28.851: INFO: creating *v1.RoleBinding: csi-mock-volumes-2298-6371/csi-resizer-role-cfg Nov 6 02:03:28.854: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-snapshotter Nov 6 02:03:28.856: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2298 Nov 6 02:03:28.856: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-2298 Nov 6 02:03:28.859: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2298 Nov 6 02:03:28.861: INFO: creating *v1.Role: csi-mock-volumes-2298-6371/external-snapshotter-leaderelection-csi-mock-volumes-2298 Nov 6 02:03:28.863: INFO: creating *v1.RoleBinding: csi-mock-volumes-2298-6371/external-snapshotter-leaderelection Nov 6 02:03:28.866: INFO: creating *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-mock Nov 6 02:03:28.869: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2298 Nov 6 02:03:28.871: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2298 Nov 6 02:03:28.874: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2298 Nov 6 02:03:28.876: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2298 Nov 6 02:03:28.879: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2298 Nov 6 02:03:28.882: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2298 Nov 6 02:03:28.884: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2298 Nov 6 02:03:28.887: INFO: creating *v1.StatefulSet: csi-mock-volumes-2298-6371/csi-mockplugin Nov 6 02:03:28.891: INFO: creating *v1.StatefulSet: csi-mock-volumes-2298-6371/csi-mockplugin-attacher Nov 6 02:03:28.894: INFO: creating *v1.StatefulSet: csi-mock-volumes-2298-6371/csi-mockplugin-resizer Nov 6 02:03:28.898: INFO: waiting for CSIDriver 
csi-mock-csi-mock-volumes-2298 to register on node node1 STEP: Creating pod Nov 6 02:03:38.415: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 6 02:03:38.419: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-v5kg2] to have phase Bound Nov 6 02:03:38.421: INFO: PersistentVolumeClaim pvc-v5kg2 found but phase is Pending instead of Bound. Nov 6 02:03:40.427: INFO: PersistentVolumeClaim pvc-v5kg2 found and phase=Bound (2.007705347s) STEP: Expanding current pvc STEP: Waiting for persistent volume resize to finish STEP: Waiting for PVC resize to finish STEP: Deleting pod pvc-volume-tester-5vwkd Nov 6 02:05:18.465: INFO: Deleting pod "pvc-volume-tester-5vwkd" in namespace "csi-mock-volumes-2298" Nov 6 02:05:18.470: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5vwkd" to be fully deleted STEP: Deleting claim pvc-v5kg2 Nov 6 02:05:30.482: INFO: Waiting up to 2m0s for PersistentVolume pvc-6f7b0398-adf9-4836-8199-46501fa2f26d to get deleted Nov 6 02:05:30.484: INFO: PersistentVolume pvc-6f7b0398-adf9-4836-8199-46501fa2f26d found and phase=Bound (1.783554ms) Nov 6 02:05:32.488: INFO: PersistentVolume pvc-6f7b0398-adf9-4836-8199-46501fa2f26d was removed STEP: Deleting storageclass csi-mock-volumes-2298-scbvjzv STEP: Cleaning up resources STEP: deleting the test namespace: csi-mock-volumes-2298 STEP: Waiting for namespaces [csi-mock-volumes-2298] to vanish STEP: uninstalling csi mock driver Nov 6 02:05:38.499: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-attacher Nov 6 02:05:38.504: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-2298 Nov 6 02:05:38.508: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-2298 Nov 6 02:05:38.511: INFO: deleting *v1.Role: csi-mock-volumes-2298-6371/external-attacher-cfg-csi-mock-volumes-2298 Nov 6 02:05:38.514: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2298-6371/csi-attacher-role-cfg Nov 6 02:05:38.517: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-provisioner Nov 6 02:05:38.522: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-2298 Nov 6 02:05:38.525: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-2298 Nov 6 02:05:38.528: INFO: deleting *v1.Role: csi-mock-volumes-2298-6371/external-provisioner-cfg-csi-mock-volumes-2298 Nov 6 02:05:38.531: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2298-6371/csi-provisioner-role-cfg Nov 6 02:05:38.535: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-resizer Nov 6 02:05:38.541: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-2298 Nov 6 02:05:38.549: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-2298 Nov 6 02:05:38.555: INFO: deleting *v1.Role: csi-mock-volumes-2298-6371/external-resizer-cfg-csi-mock-volumes-2298 Nov 6 02:05:38.559: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2298-6371/csi-resizer-role-cfg Nov 6 02:05:38.562: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-snapshotter Nov 6 02:05:38.566: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-2298 Nov 6 02:05:38.569: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-2298 Nov 6 02:05:38.573: INFO: deleting *v1.Role: csi-mock-volumes-2298-6371/external-snapshotter-leaderelection-csi-mock-volumes-2298 Nov 6 02:05:38.576: INFO: deleting *v1.RoleBinding: csi-mock-volumes-2298-6371/external-snapshotter-leaderelection 
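The online-expansion spec being torn down here grew pvc-v5kg2 while pvc-volume-tester-5vwkd stayed running ("Expanding current pvc" above). Outside the e2e framework, that trigger is nothing more than a larger storage request on the bound claim; a minimal client-go sketch, with namespace, claim name and size chosen for illustration rather than taken from this run:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// expandPVC bumps the requested size of an already-bound PVC. With a CSI
// driver that supports online expansion (controller expansion plus node
// expansion), the resize completes without deleting the consuming pod.
func expandPVC(cs kubernetes.Interface, namespace, name, newSize string) error {
	ctx := context.TODO()
	pvc, err := cs.CoreV1().PersistentVolumeClaims(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse(newSize)
	_, err = cs.CoreV1().PersistentVolumeClaims(namespace).Update(ctx, pvc, metav1.UpdateOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Hypothetical namespace and claim; the e2e run above generates its own names.
	if err := expandPVC(cs, "default", "my-claim", "2Gi"); err != nil {
		panic(err)
	}
	fmt.Println("resize requested; watch pvc.status.conditions and pvc.status.capacity")
}

With a driver that advertises node expansion, as the mock driver does here, the pod is only deleted at cleanup, which is what the log above shows.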
Nov 6 02:05:38.579: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-2298-6371/csi-mock Nov 6 02:05:38.582: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-2298 Nov 6 02:05:38.585: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-2298 Nov 6 02:05:38.588: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-2298 Nov 6 02:05:38.591: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-2298 Nov 6 02:05:38.595: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-2298 Nov 6 02:05:38.598: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-2298 Nov 6 02:05:38.601: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-2298 Nov 6 02:05:38.604: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2298-6371/csi-mockplugin Nov 6 02:05:38.608: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2298-6371/csi-mockplugin-attacher Nov 6 02:05:38.610: INFO: deleting *v1.StatefulSet: csi-mock-volumes-2298-6371/csi-mockplugin-resizer STEP: deleting the driver namespace: csi-mock-volumes-2298-6371 STEP: Waiting for namespaces [csi-mock-volumes-2298-6371] to vanish [AfterEach] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:05:50.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready • [SLOW TEST:141.872 seconds] [sig-storage] CSI mock volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 CSI online volume expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:672 should expand volume without restarting pod if attach=on, nodeExpansion=on /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:687 ------------------------------ {"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":21,"skipped":674,"failed":0} Nov 6 02:05:50.629: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:02:18.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 STEP: Creating secret with name s-test-opt-create-ab3c9e5a-58d5-4ce9-8eef-aec385a8724c STEP: Creating the pod [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:07:18.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7978" for this suite. 
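The Secrets spec that just finished builds a pod whose secret volume asks for a key the secret never contains, with optional set to false, and then waits the pod out (pod created around 02:02, teardown at 02:07). A minimal sketch of that kind of volume definition using the core/v1 types; the image and all names below are placeholders, not the generated ones from this run:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := false // non-optional: kubelet will not start the pod while the key is missing

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-key-missing"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "app",
				Image: "busybox",
				VolumeMounts: []v1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/etc/secret",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "secret-vol",
				VolumeSource: v1.VolumeSource{
					Secret: &v1.SecretVolumeSource{
						SecretName: "s-test-opt-create", // placeholder secret name
						// The secret exists, but this key does not, so the volume
						// can never be populated and the pod stays in ContainerCreating.
						Items:    []v1.KeyToPath{{Key: "data-1-missing", Path: "data-1"}},
						Optional: &optional,
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].Secret)
}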
• [SLOW TEST:300.072 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 Should fail non-optional pod creation due to the key in the secret object does not exist [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/secrets_volume.go:440 ------------------------------ {"msg":"PASSED [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]","total":-1,"completed":18,"skipped":611,"failed":0} Nov 6 02:07:18.452: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 02:03:09.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [It] should fail due to non-existent path /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307 STEP: Creating local PVC and PV Nov 6 02:03:09.074: INFO: Creating a PV followed by a PVC Nov 6 02:03:09.082: INFO: Waiting for PV local-pvzdrn6 to bind to PVC pvc-flh2m Nov 6 02:03:09.082: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-flh2m] to have phase Bound Nov 6 02:03:09.088: INFO: PersistentVolumeClaim pvc-flh2m found but phase is Pending instead of Bound. Nov 6 02:03:11.091: INFO: PersistentVolumeClaim pvc-flh2m found and phase=Bound (2.009432115s) Nov 6 02:03:11.092: INFO: Waiting up to 3m0s for PersistentVolume local-pvzdrn6 to have phase Bound Nov 6 02:03:11.094: INFO: PersistentVolume local-pvzdrn6 found and phase=Bound (2.152714ms) STEP: Creating a pod STEP: Cleaning up PVC and PV Nov 6 02:13:11.138: INFO: Deleting PersistentVolumeClaim "pvc-flh2m" Nov 6 02:13:11.142: INFO: Deleting PersistentVolume "local-pvzdrn6" [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 02:13:11.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7730" for this suite. 
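The PersistentVolumes-local spec above ("should fail due to non-existent path") binds a claim to a local PV whose backing path is missing on the node and then expects the consuming pod never to start before cleanup roughly ten minutes later. A sketch of such a PV object; the path, capacity and node name are illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-bad-path"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("2Gi"),
			},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				// Local volumes are never created on demand: if this directory is
				// absent on the node, the mount fails and the pod never starts.
				Local: &v1.LocalVolumeSource{Path: "/mnt/disks/does-not-exist"},
			},
			// Local PVs must pin themselves to the node that owns the path.
			NodeAffinity: &v1.VolumeNodeAffinity{
				Required: &v1.NodeSelector{
					NodeSelectorTerms: []v1.NodeSelectorTerm{{
						MatchExpressions: []v1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: v1.NodeSelectorOpIn,
							Values:   []string{"node1"}, // illustrative node name
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name, pv.Spec.Local.Path)
}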
• [SLOW TEST:602.109 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Local volume that cannot be mounted [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:304
    should fail due to non-existent path
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:307
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to non-existent path","total":-1,"completed":16,"skipped":409,"failed":0}
Nov 6 02:13:11.156: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":12,"skipped":509,"failed":0}
Nov 6 02:05:03.656: INFO: Running AfterSuite actions on all nodes
Nov 6 02:13:11.191: INFO: Running AfterSuite actions on node 1
Nov 6 02:13:11.191: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Fail] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume [It] should set same fsGroup for two pods simultaneously [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:810

Ran 163 of 5770 Specs in 1358.036 seconds
FAIL! -- 162 Passed | 1 Failed | 0 Pending | 5607 Skipped

Ginkgo ran 1 suite in 22m39.60698386s
Test Suite Failed
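The one failure comes from persistent_volumes-local.go:810; going by the spec name, it asserts that two pods mounting the same bind-mounted local volume at the same time both end up with the configured fsGroup on the mount. The knob involved is the pod-level security context; a minimal sketch for reproducing the scenario by hand, with the GID, claim and pod names all illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithFSGroup returns a pod that mounts the given claim with a pod-level
// fsGroup. When two such pods share one local volume, both mounts are
// expected to be group-owned by that GID.
func podWithFSGroup(name, claim string, gid int64) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{FSGroup: &gid},
			Containers: []v1.Container{{
				Name:         "writer",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c '%g' /mnt/volume1 && sleep 3600"},
				VolumeMounts: []v1.VolumeMount{{Name: "vol", MountPath: "/mnt/volume1"}},
			}},
			Volumes: []v1.Volume{{
				Name: "vol",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: claim},
				},
			}},
		},
	}
}

func main() {
	// Two pods, same claim, same fsGroup -- the condition the failed spec asserts on.
	a := podWithFSGroup("fsgroup-pod-a", "local-claim", 1234)
	b := podWithFSGroup("fsgroup-pod-b", "local-claim", 1234)
	fmt.Println(*a.Spec.SecurityContext.FSGroup, *b.Spec.SecurityContext.FSGroup)
}

Running both pods against one claim and comparing the group owner of the mount point (the stat in the command above) appears to be the check that failed in this run.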